[quote=“aggarg, post:9, topic:21405”]
A task received a byte and then it is sending this received byte to other task? You can consider buffering and sending only after x number of bytes are accumulated. It depends on your use case.
[/quote]
There are three use cases, one for each interface:
- I2C: messages may be of any size. Communication runs between smart processors and stupid chips alike. Blocking mode is most useful for chips, and while DMA and IRQ can be useful (treating outgoing and incoming differently), the ultimate limit is the bus speed. Queues may be used if the overall message size is known (e.g. 32-byte packets over I2C to mimic the NRF24L01 packet mesh network); that is a different scenario, and buffering can be used profitably there. This driver is complicated by the need to handle both I2C master and I2C slave scenarios. The I2C slave is a processor, so packet communication methods are used.
- SPI: the SPI driver is somewhat more complex, allowing the clock rate to be set on a per-instance basis, since SPI peripherals only run so fast. In addition, the hardware wants to toggle a CS line around each block of data. Certain chips (e.g. the ILI9341 display driver for 320 x 240 TFT displays) cannot tolerate this, so the driver must treat CS differently. A write to an ILI9341 must follow the sequence CS low / A0 low / send command / A0 high / send data / CS high (A0 is not important at this point). The driver must also handle the simpler sequence of CS low / send command / send data / CS high, and it does. In this use case, queues are not useful.
- USART: USART serial data is handled only on a byte basis in this use scenario, although all the other methods (blocking, DMA, IRQ) can be used. Serial data here is mostly sent to a console, although packet-based communication would also be possible.
While all three drivers (I2C, SPI, USART) provide access to all four methods (blocking, queue, IRQ and DMA), not all use scenarios are practical. It should be noted that the receive and transmit methods are allowed to differ: for example, an SPI instance's default mode can be set to blocking (for display commands) while its transmit mode is overridden to allow DMA transfer of block data to the display.
[quote=“aggarg”]
Unless you elaborate these situations, it is hard to comment. Are you looking for some help here?
[/quote]
I have one particularly questionable scenario, covering both receive and transmit for USART data. The code is below:
// Constantly calls the blocking Receive.
// When a character is received, the result is placed on a queue.
// May need a delay in here to allow task switching.
void USART_RECEIVE_TASK(void const *argument)
{
    HAL_USART *me = (HAL_USART *)argument;
    uint8_t buf = 0;

    while (1)
    {
        HAL_UART_Receive(me->huart, &buf, 1, HAL_MAX_DELAY);
        xQueueSend(me->receive_queue, &buf, portMAX_DELAY);
    }
}
While a delay does not seem to be needed between the receive and the write to the queue, the run-time statistics show a lot of time spent in (say) this receive task. I don’t think that’s good for overall system performance, although the statistics might be misleading.
What I think I need help with is this: there used to be a zero-footprint method of going from an interrupt directly to a queue. It might not even fit this situation, and the old code may or may not have worked well. I can find no way of implementing it now, since the source code was apparently removed sometime between 2016 and 2019.
So the question becomes: is there a better way to handle this USART receive task?
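One standard pattern that still exists in current FreeRTOS is xQueueSendFromISR() called from the HAL receive-complete callback, so no task ever blocks inside HAL_UART_Receive(). A sketch, assuming a one-byte HAL_UART_Receive_IT() that is re-armed in the callback; the handle and queue names (huart1, uart_rx_queue) are placeholders, the HAL header name depends on the part family, and this obviously only runs on target:

```c
#include "FreeRTOS.h"
#include "queue.h"
#include "stm32f4xx_hal.h"   /* adjust to your part family */

extern UART_HandleTypeDef huart1;   /* placeholder instance names */
extern QueueHandle_t uart_rx_queue;

static uint8_t rx_byte;

/* Start the first one-byte interrupt-driven receive (e.g. from init code). */
void uart_rx_start(void)
{
    HAL_UART_Receive_IT(&huart1, &rx_byte, 1);
}

/* HAL calls this from the UART IRQ when the byte has arrived.
 * The byte goes straight onto the queue; no polling task is needed. */
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    BaseType_t woken = pdFALSE;

    if (huart == &huart1) {
        xQueueSendFromISR(uart_rx_queue, &rx_byte, &woken);
        HAL_UART_Receive_IT(&huart1, &rx_byte, 1);  /* re-arm for next byte */
    }
    portYIELD_FROM_ISR(woken);  /* switch now if a higher-priority task woke */
}
```

The consuming task then blocks on xQueueReceive() with portMAX_DELAY and uses no CPU time until a byte actually arrives, which should also make the run-time statistics look sane. For higher data rates, a stream buffer filled with xStreamBufferSendFromISR() (available since FreeRTOS V10) avoids the per-byte queue overhead.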