I’m using a stream buffer to do serial I/O. The device I’m talking to is a command/response type device, so typically we’ll transmit a message, then call xStreamBufferReceive to receive the response. Our stream buffer is set up to receive before we transmit, so we can catch any data that arrives during transmission, which can happen if the device detects an error in our command.
In stream_buffer.c, xStreamBufferReceive():
if( xBytesAvailable <= xBytesToStoreMessageLength )
{
    /* Wait for data to be available. */
    traceBLOCKING_ON_STREAM_BUFFER_RECEIVE( xStreamBuffer );
    ( void ) xTaskNotifyWait( ( uint32_t ) 0, ( uint32_t ) 0, NULL, xTicksToWait );
    pxStreamBuffer->xTaskWaitingToReceive = NULL;
    /* Recheck the data available after blocking. */
    xBytesAvailable = prvBytesInBuffer( pxStreamBuffer );
}
This if checks whether xBytesAvailable, the number of bytes already received, is less than or equal to xBytesToStoreMessageLength. This looks wrong, as we want to block until the number of bytes we specified by setting pxStreamBuffer->xTriggerLevelBytes has been received.
If no bytes have been received before we call this function, it works: xBytesAvailable is 0, and 0 is <= 0 (xBytesToStoreMessageLength is 0 since we’re using the stream buffer directly, not via a message buffer). However, when bytes arrived during transmission, xBytesAvailable is > 0, so this condition is false, and xStreamBufferReceive() returns right away, before the complete response has been received.
Am I understanding the logic correctly, or am I missing something, or using stream buffers incorrectly? Thanks.