I open this thread because I want to make sure I didn't miss something about stream buffers (and message buffers); I will refer to them as SBs.
Let's say I create an SB with a buffer size of 100, and the send timeout is infinite.
Scenario 1: The SB is empty and I want to write, let's say, 90 bytes. Everything works fine.
Scenario 2: The SB already holds 50 bytes and I want to write 55 bytes. The sending task blocks, eventually someone receives some bytes, the sender unblocks, my 55 bytes are written, and they get received later. Everything okay.
Scenario 3: The SB is empty and I want to write 110 bytes, i.e. 10 bytes more than the whole capacity of the buffer. The task waits indefinitely… That's because the sending task, in xStreamBufferSend, checks the space available in the buffer and blocks on its task notification while 'xSpace < xRequiredSpace'. There is a bug here… The sending task can query the space available in the SB, yes, but it has no way of knowing the total capacity of the buffer. So if I send more bytes than the total capacity, I brick the thing.
In my opinion, the case where someone wants to send more bytes than is feasible has not been taken into account. It should write the chunk of data that fits at the moment and return the number of bytes actually written, just like in the scenario where a finite timeout expires before enough space is free.
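For reference, a minimal sketch of what I mean (the buffer size and byte counts are just the example numbers above, error handling omitted):

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "stream_buffer.h"

void vScenario3( void )
{
    /* 100-byte stream buffer, trigger level of 1 byte. */
    StreamBufferHandle_t xSB = xStreamBufferCreate( 100, 1 );
    static uint8_t ucData[ 110 ] = { 0 };
    uint8_t ucDrain[ 100 ];

    configASSERT( xSB != NULL );

    /* Scenario 1: 90 bytes into an empty buffer - returns 90 immediately. */
    ( void ) xStreamBufferSend( xSB, ucData, 90, portMAX_DELAY );

    /* Drain it again just for the sake of the example. */
    ( void ) xStreamBufferReceive( xSB, ucDrain, sizeof( ucDrain ), 0 );

    /* Scenario 3: 110 bytes into the empty 100-byte buffer with an infinite
     * timeout - the task blocks here forever, because the required space
     * can never become available. */
    ( void ) xStreamBufferSend( xSB, ucData, 110, portMAX_DELAY );
}
```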
Thanks for this information. I think there is an assumption that the developer who creates the stream buffer knows how big it is, but I agree with you: if the behaviour is as reported (not double checked, but I assume so), this is not a good experience. We will add this to our backlog. If you would like a faster fix, I would encourage you to create a pull request in the GitHub repo.
One thought: doesn't the message buffer implementation require that a message transmission either fully succeed or fully fail? Once it puts the message length in the stream, I would think it NEEDS to get the rest of the message into the stream or the read side will get out of sync.
I think there is an assumption that the developer who creates the stream buffer knows how big it is
In a sense, yes. I use them in a lot of scenarios where the stream/message buffer is more or less buried inside my module code. This problem arose in a situation where the stream buffer was an interface between my module and a colleague's piece of code, and the size of the buffer was not shared between both parties. It was easy to spot this behaviour with Tracealyzer, but it's very subtle and I can't imagine how much time something like this could take to track down with printfs…

Intuitively I thought that writing what fits would have been the behaviour. I knew that in my scenario it was possible to try to write more bytes to the stream buffer than its capacity, so I looped around the call to flush all the data in repeated calls, relying on the documented behaviour of the stream buffer when a timeout occurs: it writes what it can and, after the timeout, returns the number of bytes it was able to write.
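Concretely, the flushing loop I had in mind looked roughly like this (the identifiers are made up for the example). With a finite timeout it makes progress chunk by chunk, but with portMAX_DELAY the very first call never returns once the remaining length exceeds the buffer capacity:

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "stream_buffer.h"

/* Push all of pucData into the stream buffer, relying on the documented
 * timeout behaviour: each call writes what it can before the timeout and
 * returns the number of bytes actually written. */
static void vFlushAll( StreamBufferHandle_t xSB,
                       const uint8_t *pucData,
                       size_t xLen )
{
    size_t xSent = 0;

    while( xSent < xLen )
    {
        /* With a finite timeout this makes progress chunk by chunk.  With
         * portMAX_DELAY, and ( xLen - xSent ) larger than the buffer
         * capacity, this single call blocks forever. */
        xSent += xStreamBufferSend( xSB,
                                    pucData + xSent,
                                    xLen - xSent,
                                    pdMS_TO_TICKS( 100 ) );
    }
}
```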
One thought: doesn't the message buffer implementation require that a message transmission either fully succeed or fully fail? Once it puts the message length in the stream, I would think it NEEDS to get the rest of the message into the stream or the read side will get out of sync.
From what I can see, xMessageBufferSend doesn't write anything if the timeout occurred and there was not enough space for the whole message; the check is done in the internal prvWriteMessageToBuffer function. But the problem of wanting to write more data than the size of the buffer is still present.
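For anyone following along, that all-or-nothing behaviour is easy to see with a finite timeout; a rough sketch (the sizes are arbitrary):

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "message_buffer.h"

void vMessageBufferAllOrNothing( void )
{
    /* 100 bytes of storage; each message also consumes sizeof( size_t )
     * bytes for its length field, so the largest usable message is smaller. */
    MessageBufferHandle_t xMB = xMessageBufferCreate( 100 );
    static const uint8_t ucBig[ 110 ] = { 0 };
    size_t xSent;

    configASSERT( xMB != NULL );

    /* This message can never fit: with a finite timeout the call blocks
     * until the timeout expires and then returns 0 - nothing is written,
     * so the reader cannot get out of sync.  With portMAX_DELAY it would
     * block forever, which is the problem discussed in this thread. */
    xSent = xMessageBufferSend( xMB, ucBig, sizeof( ucBig ),
                                pdMS_TO_TICKS( 100 ) );
    configASSERT( xSent == 0 );
}
```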
Now, the solution was easy for me: instead of calling xStreamBufferSend in a loop and comparing against the written bytes, I loop with a maximum chunk length equal to the internal buffer size. Of course, that means the buffer size has to be known to the sender. It's not a big worry, but it's something that should have been documented. The FreeRTOS documentation is very complete and comprehensive and I heavily rely on it, hence my surprise at not finding that information!
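Roughly what I ended up with (the identifiers are made up for the example, and MY_SB_CAPACITY has to match the size passed to xStreamBufferCreate()):

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "stream_buffer.h"

#define MY_SB_CAPACITY    100   /* must match the size given to xStreamBufferCreate() */

/* Send xLen bytes in chunks no larger than the buffer capacity, so a single
 * call can never ask for more space than can ever exist. */
static void vSendChunked( StreamBufferHandle_t xSB,
                          const uint8_t *pucData,
                          size_t xLen )
{
    size_t xSent = 0;

    while( xSent < xLen )
    {
        size_t xChunk = xLen - xSent;

        if( xChunk > MY_SB_CAPACITY )
        {
            xChunk = MY_SB_CAPACITY;
        }

        /* Each request now fits in the buffer, so an infinite timeout cannot
         * deadlock - it just waits for the reader to drain the buffer. */
        xSent += xStreamBufferSend( xSB, pucData + xSent, xChunk, portMAX_DELAY );
    }
}
```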
Note the head revision in Git now has multiple direct to task notifications per task. Stream and message buffers, which use direct to task notifications, continue to use direct to task notification 0 (the first of those available) - so you will find the stream buffer code in Git slightly modified from that in the release.
My first comment was that if message buffers just used ordinary stream buffers wrapped in some code to add/retrieve the length, they would have a problem with partial writes; but in actuality, the buffer knows whether it is a message buffer or a stream buffer, so it only does partial writes for plain stream buffers.
It looks like the goal of the existing code is to try to make sure that the stream of bytes is written as a single chunk, so (assuming a big enough receive buffer) the entire message will be retrieved at once, rather than broken into multiple pieces, which might make some forms of processing more difficult. If the receiver has a buffer as big as the stream buffer, a single read may cover multiple sends, but a send will never be split across multiple reads. This could be a useful feature.
That says that if you try to send more than the buffer can hold, it looks like it will wait until the timeout, then send the buffer's capacity in bytes, and then return the partial write.
I suppose that a workaround would be to give the space-check loop a special case for when there are more bytes to send than space available AND the buffer is empty: do an intermediate prvWriteMessageToBuffer to send one buffer full, adjust the parameters to reflect that write, and then continue waiting.
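At the application level the same idea can be approximated with the existing API; a rough sketch (the names and the polling are just for illustration, not the kernel-side change itself):

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "task.h"
#include "stream_buffer.h"

/* Approximation of the idea above: whenever the remaining data is larger
 * than the buffer capacity, wait for the buffer to drain, push one full
 * buffer's worth, and repeat; the tail that fits is sent normally. */
static void vSendOversized( StreamBufferHandle_t xSB,
                            size_t xCapacity,   /* size given to xStreamBufferCreate() */
                            const uint8_t *pucData,
                            size_t xLen )
{
    size_t xSent = 0;

    while( ( xLen - xSent ) > xCapacity )
    {
        /* Poll until the reader has emptied the buffer (the kernel-side
         * version would block on the task notification instead). */
        while( xStreamBufferIsEmpty( xSB ) == pdFALSE )
        {
            vTaskDelay( 1 );
        }

        xSent += xStreamBufferSend( xSB, pucData + xSent, xCapacity, portMAX_DELAY );
    }

    /* The remainder now fits, so the normal call is safe even with an
     * infinite timeout. */
    ( void ) xStreamBufferSend( xSB, pucData + xSent, xLen - xSent, portMAX_DELAY );
}
```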
@rtel
Yes, I noticed that. It's a great idea, because I found out at my expense that there can be undesired side effects with the current FreeRTOS code I use: if a task relies on a direct to task notification (for example, ISR -> task notification) and uses a stream buffer too, the two can get confused and events can be missed.
My first comment was that if message buffers just used ordinary stream buffers wrapped in some code to add/retrieve the length, they would have a problem with partial writes; but in actuality, the buffer knows whether it is a message buffer or a stream buffer, so it only does partial writes for plain stream buffers.
Indeed, there would have been a problem. But the stream buffer send/receive has the necessary flag check to differentiate between a stream buffer and a message buffer, and it doesn't allow a partial write for the message buffer in case of a timeout, which is perfect.
It looks like the goal of the existing code is to try to make sure that the stream of bytes is written as a single chunk, so (assuming a big enough receive buffer) the entire message will be retrieved at once, rather than broken into multiple pieces, which might make some forms of processing more difficult.
Indeed, it tries to write it in one shot. In the case of the stream buffer, I think it makes no difference, because a stream buffer assumes a stream with no delimitation (sound samples, for example). That's not the assumption with a message buffer (which could carry command strings, for example, and HAS to be logically delimited).
That says that if you try to send more than the buffer can hold, it looks like it will wait until the timeout, then send the buffer's capacity in bytes, and then return the partial write.
In the current implementation, that's what it will do, indeed. But it's kind of funky to have to wait until the timeout to finally 'realise' that only a partial write can be done (only for the stream buffer). It's a waste of time to have to wait for the timeout, and a deadlock if the timeout is infinite.
I suppose that a workaround would be to give the space-check loop a special case for when there are more bytes to send than space available AND the buffer is empty: do an intermediate prvWriteMessageToBuffer to send one buffer full, adjust the parameters to reflect that write, and then continue waiting.
I'm working on a piece of code that extends the message buffer flag check at the top of xStreamBufferSend to do exactly that. A pull request will be issued shortly.
The one-shot writing to the stream buffer might make a difference to the efficiency of the reading routine. Say you are sending UTF-8 character streams, and the sender makes sure it always sends blocks that are whole grapheme clusters. If the reading code always got the full buffer at a time, it would know that the received data would always, or at least normally, end on a grapheme cluster boundary. Because of the possibility of partial writes, it may need code to handle the case of a partial sequence, moving that partial cluster from the end of the buffer to the beginning and then filling in after it, even though most of the time it wouldn't need to do that.
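To make that concrete, the receive side would need something along these lines just to spot a trailing partial sequence; this sketch only checks code point boundaries, and real grapheme clusters would need more than this:

```c
#include <stddef.h>
#include <stdint.h>

/* Number of bytes at the end of pucBuf that form an incomplete UTF-8 code
 * point, or 0 if the buffer ends on a code point boundary.  The caller would
 * move these bytes to the front of the buffer before the next receive. */
static size_t xUtf8TrailingPartial( const uint8_t *pucBuf, size_t xLen )
{
    size_t xIndex = xLen;
    uint8_t ucLead;
    size_t xHave, xNeed;

    /* Step back over at most three continuation bytes (10xxxxxx). */
    while( ( xIndex > 0 ) &&
           ( ( pucBuf[ xIndex - 1 ] & 0xC0 ) == 0x80 ) &&
           ( ( xLen - xIndex ) < 3 ) )
    {
        xIndex--;
    }

    if( xIndex == 0 )
    {
        return 0;   /* Nothing but continuation bytes - nothing to hold back. */
    }

    ucLead = pucBuf[ xIndex - 1 ];
    xHave = xLen - xIndex + 1;   /* Bytes from the lead byte to the end. */

    if( ( ucLead & 0x80 ) == 0x00 )      xNeed = 1;   /* ASCII */
    else if( ( ucLead & 0xE0 ) == 0xC0 ) xNeed = 2;
    else if( ( ucLead & 0xF0 ) == 0xE0 ) xNeed = 3;
    else if( ( ucLead & 0xF8 ) == 0xF0 ) xNeed = 4;
    else return 0;                                    /* Invalid lead byte. */

    return ( xHave < xNeed ) ? xHave : 0;
}
```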
Some days ago, the linked thread asked the same question as yours: How to deal with this scenario about messagebuffer. Only Mr. htibosch gave a valid resolution, and it is implemented from the app/user point of view, not from the kernel point of view.
@jiuhao2019 I don't think we are talking about the same thing, though it's kind of related. I think you were trying to fetch a message from a message buffer and provided a receiving buffer that was too small. The reception function would return immediately, and you would loop infinitely in a high-priority task, thus locking your system. That is a bit different.
Sorry, correct the word “same” to “related”.
You know, what I mean is the underlying cause that leads the system into wrong logic through FreeRTOS's message buffer under a specific condition.
I see! But in your scenario, I think the documentation does state the behaviour of the message buffer:
If the length of the message is greater than xBufferLengthBytes then the message will be left in the message buffer and zero is returned.
In my case, the documentation doesn't say anything about the scenario where the user passes a longer message/stream than the underlying buffer can ever hold…
A simpler solution than a code change could be to adapt the documentation and add information about that limit case. But still, no "error" value is returned to the user; their task is simply deadlocked, as opposed to returning 0.
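For completeness, in that receive-side scenario the limit case can be guarded at application level; a sketch, assuming xMessageBufferNextLengthBytes() is available in the FreeRTOS version in use, and covering the case where the over-sized message is already queued:

```c
#include <stdint.h>
#include <stddef.h>
#include "FreeRTOS.h"
#include "message_buffer.h"

/* Returns pdTRUE and fills *pxLen when a message was received (or the
 * timeout expired), pdFALSE when the next queued message is too large for
 * the supplied buffer - in that case *pxLen holds its size so the caller
 * can handle it instead of looping forever on a zero return value. */
static BaseType_t xTryReceive( MessageBufferHandle_t xMB,
                               uint8_t *pucRxBuf,
                               size_t xRxBufSize,
                               TickType_t xTicksToWait,
                               size_t *pxLen )
{
    const size_t xNext = xMessageBufferNextLengthBytes( xMB );

    if( ( xNext != 0 ) && ( xNext > xRxBufSize ) )
    {
        /* The message would be left in the buffer and 0 returned, so report
         * it explicitly rather than letting the caller spin. */
        *pxLen = xNext;
        return pdFALSE;
    }

    *pxLen = xMessageBufferReceive( xMB, pucRxBuf, xRxBufSize, xTicksToWait );
    return pdTRUE;
}
```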
Your method looks different from mine. It looks like you change the behaviour to write what you can as soon as you can, while the method I described was to wait for the buffer to empty and then fill it. Your method can result in a LOT more reactivations of the sending task (costing CPU time). Also, your method does look to switch back to the standard behaviour once you have sent enough bytes that the remainder will fit in the stream buffer.
I would probably just put an assert on this case for a MessageBuffer (rather than always returning 0), as it is very hard to imagine a case where you would trigger it other than through a program error. (The receive-side version of the error is maybe different.)
While the documentation doesn’t explicitly state that you will get a failure when trying to send a message bigger than the message buffer, the detail that a message buffer send will always send all or nothing sort of implies this.
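An application-side version of that assert could be as small as the following sketch (MY_MB_STORAGE_BYTES is a made-up constant that has to mirror the value passed to xMessageBufferCreate()):

```c
#include <stddef.h>
#include "FreeRTOS.h"
#include "message_buffer.h"

#define MY_MB_STORAGE_BYTES    100   /* value passed to xMessageBufferCreate() */

static size_t xCheckedSend( MessageBufferHandle_t xMB,
                            const void *pvData,
                            size_t xLen,
                            TickType_t xTicksToWait )
{
    /* Each message consumes an extra sizeof( size_t ) bytes for its length
     * field, so a message can never be larger than the storage minus that
     * overhead.  Trying anyway is treated as a programming error. */
    configASSERT( xLen <= ( MY_MB_STORAGE_BYTES - sizeof( size_t ) ) );

    return xMessageBufferSend( xMB, pvData, xLen, xTicksToWait );
}
```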
Asserting in the case of the message buffer is a good idea; I think it's a good compromise, and better than silently waiting indefinitely.
Anyway, now you are aware of it. I will let you deal with this detail the way you see fit; I trust your judgment. Thanks for providing this good-quality code!
… so I'll also add a little statement.
I also think a message buffer is different from a stream buffer.
A message buffer should be all or nothing, maybe with a FreeRTOS-internal assert to reduce boilerplate assert code in applications. There is no use case for partial messages, and in my opinion this is an application error.
For stream buffers, I think the case where the amount of data exceeds the configured capacity should be handled either internally, by just streaming all the requested data (in chunks) until done, or by erroring out early.
It’s a question of interface/behavior definition.
For the 1st case it should be documented that selecting a matching capacity is a compromise between better runtime performance (large enough: no/fewer chunks) and memory consumption (less reserved stream buffer storage).
That way stream buffers would provide an interface and behaviour similar to TCP sockets, which wouldn't be too bad.
The point of the 2nd approach is less code in the stream buffer implementation, as it seems that stream buffer capacity is usually not an issue. Of course it shouldn't deadlock when called with inappropriate arguments.
Just my 2ct
@hs2 I think that too. I often write code for Linux kernel modules, and in the kernel it's mandatory to deal with every possible scenario in a function and return at least an error code. Bricking a thread is a BUG and has to be corrected. Being kernel module code that is loaded dynamically, you cannot make assumptions about anything. I understand that in the RTOS world you compile the whole code at once and should be able to know the size of the buffer, but I still think it would be good practice.
And yes @hs2, a message buffer is different from a stream buffer, and it HAS to be all or nothing. Now that I think about it, an assert would be the right thing to do. But in the case of a stream buffer, I don't think sending more data than is feasible is necessarily an error scenario, and it should be dealt with as you mentioned. The TCP socket is a good analogy.