Interaction/Deferring Strategies for Event-Driven Tasks?

Hey there,

Not sure if this is the right place to ask, but do you all have suggestions on how to implement the interactions between event-driven tasks where one of them needs to focus on handling a specific event type at certain times?

The setup:
I have three tasks as described below.

  • App1 - same priority as App2. Blocks on an event queue (Queue1) and has a state machine that will handle complete UART messages. Can also send out messages over UART.
  • UART Message Parser - higher priority than App1/App2. Processes the raw bytes in a UART RX buffer, determining if they form a complete message. If so, notify App1 about the details by pushing to Queue1.
  • App2 - same priority as App1. Blocks on an event queue (Queue2) and handles interactions with external entities. Interacts with App1 by pushing to Queue1.

The challenge:
At certain times, App1 needs to enter a state (call it the critical state) where it only cares about handling the events from UART Message Parser, as it is completing a multi-packet UART transaction. That said, the requests made by App2 are still valid, so if possible I would like to postpone the handling until the UART transaction is complete.

The potential solutions I could think of:

  1. Single queue on App1:
  App2 still pushes to App1’s queue as needed. If App1 happens to be in the critical state, App1 simply drops the event and notifies App2 to try again later. This might be the most straightforward approach, even though it doesn’t postpone the handling.

  2. Two queues on App1 (Queue1 and DelayedQueue1):
  App2 and UART Message Parser always push to Queue1. When App1 is in the critical state and sees an event from App2, it pushes the event to a separate DelayedQueue1 and moves on. Contents within DelayedQueue1 will be evaluated once the critical state ends. After all items within DelayedQueue1 are processed, App1 returns to normal operation and handles only Queue1. If something within DelayedQueue1 requires App1 to reenter the critical state, it shall do so and handle the rest of DelayedQueue1 later (as a result, DelayedQueue1 may continue to grow during the critical state).

Would love to know your thoughts on this! Thanks in advance for your time.

Both the solutions you suggested should work. Here are a couple of other questions/suggestions:

  1. Is it possible to have a separate task for handling these critical messages?
  2. How about sending these critical messages to the front of the queue using xQueueSendToFront?

Thanks so much for the feedback!

Regarding the questions/suggestions:

  1. Is it possible to have a separate task for handling these critical messages?
    It would be nice to have a separate task, but the target application has quite limited RAM, so I think having a secondary queue might consume fewer resources? I understand it also depends on the actual implementation though.
  2. How about sending these critical messages to the front of the queue using xQueueSendToFront?
    Since the UART messages are not strictly request-response type, it is possible to get multiple complete messages in a row, so sending to the front of queue might mess up the order.

Without knowing the full intention behind the design I cannot say for sure if this is appropriate, but I think another viable option would be to have one queue for the UART messages and one queue for messages from App2. This achieves separation of concerns between App1 and App2, since App2 does not need to know if App1 is in the critical state or not - it will always put messages in the same queue.

Obviously this does not work if you need to maintain ordering between the UART messages and the messages from App2.

Another idea would be to continue with the single-queue approach, but have a “backlog” FIFO inside App1. If App1 is in the critical state and it sees a non-critical message, it inserts the message into the backlog FIFO. Once it exits the critical state, it can pop messages out of the backlog and process them.


Appreciate the input @dykeagdrs! With the two queues approach you first mentioned, would that mean App1 at normal operation will monitor both queues, then once it needs to enter critical state, it would only pay attention to the queue that stores the UART messages?

As for a way to monitor both queues, perhaps App1 can do one of the following:

  1. Call xQueueReceive on queue for App2 with 0 blocking time, then if nothing is received call xQueueReceive on queue for the UART message with non-zero blocking time. If nothing is received then rinse and repeat to handle the events on both queues. When App1 needs to enter critical state, it exclusively waits on the queue for UART messages until it can exit critical state.

  2. This might be tricky to implement, but use a queue set to monitor both queues. When App1 is about to enter critical state, it needs to remove one of the queues from the set (the queue needs to be empty before it can be removed though). This approach is probably not as effective as the first one.

Yeah I agree this is really an open-ended discussion and there are various ways to solve the problem. Just wanted to brainstorm some potential solutions and evaluate their impact. Will mark the post resolved soon. Thanks again for your help!

With the two queues approach you first mentioned, would that mean App1 at normal operation will monitor both queues, then once it needs to enter critical state, it would only pay attention to the queue that stores the UART messages?

Yes, that is what I was thinking.

Option 1 is viable if latency on the App2 queue is not a concern. You can trade latency for CPU time by making the block time smaller, but you are limited by your tick rate for the minimum delay there. You would also have a forced delay between each message from App2, unless you include special logic to check the queue again after processing a message. This may be acceptable for your situation though.

I have another idea for option 2 - but it only works if you enter the ‘critical state’ by receiving a message via the UART queue. You would essentially have 2 states - one for the critical section and one for normal operation. When you receive a message putting you in the critical state, you do NOT go back to blocking on the queue set - you block on the single queue instead.
Perhaps this is clearer in pseudocode:

App1Task() {
    while (true) {
        if (in critical state) {
            BlockOnUARTqueue();
        } else {
            BlockOnQueueSet();
        }
        <process messages>
    }
}

Although now that I think about it… I’m not sure if FreeRTOS allows you to block on a queue that is already part of a queue set. I’ve never actually used queue sets myself.

If you want to go the two queue route, I think the choice on implementation comes down to how much latency you can accept from App2’s queue - Option 1 is simpler (and thus less likely to go wrong), but incurs overhead and latency. Option 2 is probably more efficient in terms of CPU cycles and has lower latency, but it’s more complex and I can see it being a pain to debug if the queue sets don’t let you access a queue from outside the set.

If it’s not a problem for App1 to extract and save non-priority messages during the critical section, I think I would personally go with the backlog FIFO approach, since you don’t need to mess with queue sets. But that’s just my opinion!

–
Alex

Understood. Yeah, looking at the documentation under “Pend (or block) on multiple RTOS queues and semaphores in a set”:

A receive (in the case of a queue) or take (in the case of a semaphore) operation must not be performed on a member of a queue set unless a call to xQueueSelectFromSet() has first returned a handle to that set member.

So chances are that’s probably not allowed. Can probably try it out and see what happens :stuck_out_tongue:

I agree: using the backlog FIFO approach also maintains the order of events and should be easier to debug/test. Thanks again for your time!

You CAN block on the original queue, but you do need to take an item off the QueueSet too. The issue is that if it contains the handle for Queue 2, you need to remember to replace one return of Queue 1 with a fetch from Queue 2 instead.

Hey @richard-damon, thanks for the feedback! Do you mind using the APIs to elaborate what you meant please? Based on the ones I could find, I am not sure how that can be achieved.

The API doesn’t document how to do this, because it is based on the internals of how QueueSets work. When you create a QueueSet, a special Queue is created to hold Queue Handles. When an item is put into a Queue in the QueueSet, the handle of the Queue is put into that special Queue.
When you want to get an item normally from the QueueSet, you call xQueueSelectFromSet, which will read (and block on) that special Queue and return the handle of a Queue in the set with data (the one that has the oldest data); you then read the data from that Queue. If you follow that procedure, then the number of items in that special queue will always be equal to the sum of the number of items in the Queues in the Set. THAT is the key attribute that needs to be kept to make things work. Also, when you follow this, the number of items in each of the Queues will be equal to the number of times that Queue’s handle is stored in the special queue.

There is nothing in the code that prevents you from reading from any of those Queues without getting its handle from the QueueSet, but to keep that count balanced, you need to take an item off the special queue with a call to xQueueSelectFromSet. The problem now is that if the handle you got wasn’t for the Queue you read from, the count of handles and items on the queues is out of balance, so you need to do something to get that balance back.

This could be done by remembering, once you are done with the “critical message loop”, how many serial messages you took off while the set said App2, and then treating that many later returns of the serial handle as App2 messages instead. Another method is to just ignore the handle that xQueueSelectFromSet returns, count on the property that the number of messages in the special queue equals the sum of the messages in the Queues in the Set, and just check with 0-tick reads which Queue actually has data. This breaks the normal promise that you will handle the oldest message first, but maybe that isn’t what is important to you.


Ah I see. Thanks again for clarifying!