FreeRTOS application stuck in SPI receive while loop

Hello.
I am working on a FreeRTOS based application. My code works fine, but sometimes it gets stuck in a while loop inside the low-level SPI peripheral library, which polls a status register to check whether the RX buffer (internal to the PIC) is empty.

This SPI is attached to an external flash that is used to archive system events. While investigating the issue I realized that multiple tasks were calling the archiving function, so my assumption was that the problem was caused by the resource (the external flash) being shared among the tasks.

To fix this I created a separate archiving task: all of the other tasks now post to a queue, and the archiving task drains that queue and writes the entries to the external flash. I was hoping this modification would fix the issue, but it didn't.

Briefly, the archiving task uses a blocking call to write the contents of the queue to the external flash. When I give this task the lowest priority my code gets stuck in the same while loop, but when I give it the highest priority the code runs smoothly. Do you think the blocking call is causing the issue, and what would be the ideal solution for this scenario?
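
For clarity, the structure is roughly as sketched below. The names (Event_t, xArchiveQueue, SenderTask, ArchiveTask, FlashWriteBlocking) are placeholders for illustration, not my actual code:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* Placeholder for the blocking flash driver call used by the archiving task. */
extern void FlashWriteBlocking( const void *pvData, uint32_t ulLength );

typedef struct
{
    uint32_t ulTimestamp;
    uint16_t usEventId;
} Event_t;

/* Created with xQueueCreate() before the scheduler starts. */
static QueueHandle_t xArchiveQueue;

/* Sender tasks only post events to the queue; they never touch the flash. */
static void SenderTask( void *pvParameters )
{
    Event_t xEvent;
    ( void ) pvParameters;

    for( ;; )
    {
        /* ... fill xEvent with the event to be archived ... */
        xQueueSend( xArchiveQueue, &xEvent, portMAX_DELAY );
        vTaskDelay( pdMS_TO_TICKS( 100 ) );  /* temporal blocking between iterations */
    }
}

/* The only task that writes to the external flash. */
static void ArchiveTask( void *pvParameters )
{
    Event_t xEvent;
    ( void ) pvParameters;

    for( ;; )
    {
        if( xQueueReceive( xArchiveQueue, &xEvent, portMAX_DELAY ) == pdPASS )
        {
            FlashWriteBlocking( &xEvent, sizeof( xEvent ) );  /* blocking SPI write */
        }
    }
}
```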

Guidance is highly appreciated.

Regards
Ahmad Naeem

It is hard to make a guess without looking at the code, but the following explains the execution pattern in the two scenarios:

  1. When the archiving task has the lowest priority - it will run only after the queue is full and all the sender tasks block. As soon as it removes one item from the queue, one sender task will unblock and post to the queue again. Once the queue becomes full again, the archiving task will run.
  2. When the archiving task has the highest priority - it will run as soon as one sender has posted an item to the queue. A sender will only get a chance to run when the archiving task blocks again on the empty queue.

Could this be related to the behavior you are observing?

Yes, this matches the current execution of the code, except that none of the tasks are synchronized; all tasks, including the archiving task, are implemented with temporal blocking (fixed delays).

I did not get this. Do you mean to say that all tasks use vTaskDelay between consecutive executions of the loop?
Could it be that the ‘sender’ tasks are generating data faster than the flash can handle?

Thank you for your message. Yes, you are correct: every task uses vTaskDelay between iterations of its loop.

No, the flash can handle the data archived by the archiving task. As I mentioned, all of the archiving is done in a single task, so the external flash is not shared among different tasks for writing. However, there is another task that reads from the flash at random events. Do you think that could be the issue, i.e. while the archiving task is in the middle of a flash operation it is pre-empted by the task that needs to read from the flash? That would mean the external flash is still shared between two tasks without synchronization. Do you think this is the issue?

It is hard to guess without being familiar with the Flash and MCU you are using. However, it should be easy to confirm by disabling the reading task.

Thanks for your message @aggarg. I am working with a PIC32MZ2064. Following your suggestion I disabled the reading task, and the code then runs smoothly. Should we conclude that the problem is a resource being shared between two tasks without a proper locking mechanism?
If so, please suggest a way to implement the read task without running into this problem.

Regards

We can be reasonably sure, but to confirm you should reach out to the vendor and ask whether their driver can be used from more than one task.

Can you implement a solution where each task posts an event to a queue which is serviced by one task? Something like the "UDP/IP Stack: Solution" described on this page - Pend on multiple RTOS objects. Another alternative is to wrap your calls to the Flash driver with a mutex so that only one task calls them at a time.
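
The mutex approach would look roughly like the sketch below; FlashWrite and FlashRead are placeholders for whatever your flash driver's write and read calls actually are:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "semphr.h"

/* Placeholders for the real flash driver calls. */
extern void FlashWrite( const void *pvData, uint32_t ulLength );
extern void FlashRead( void *pvBuffer, uint32_t ulLength );

static SemaphoreHandle_t xFlashMutex;

/* Call once before the scheduler starts. */
void vFlashLockInit( void )
{
    xFlashMutex = xSemaphoreCreateMutex();
    configASSERT( xFlashMutex != NULL );
}

/* Both the archiving task and the reading task use these wrappers, so one
 * task can never interrupt the other's flash/SPI transaction. */
void vFlashWriteLocked( const void *pvData, uint32_t ulLength )
{
    xSemaphoreTake( xFlashMutex, portMAX_DELAY );
    FlashWrite( pvData, ulLength );
    xSemaphoreGive( xFlashMutex );
}

void vFlashReadLocked( void *pvBuffer, uint32_t ulLength )
{
    xSemaphoreTake( xFlashMutex, portMAX_DELAY );
    FlashRead( pvBuffer, ulLength );
    xSemaphoreGive( xFlashMutex );
}
```

A mutex (rather than a binary semaphore) also gives you priority inheritance, which helps given the priority differences you described.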

Thank you for your valuable suggestions. The issue has been resolved using the approach you mentioned, i.e. with a queue.

Code is running smoothly now.
