If task A is writing to a queue with portMAX_DELAY as the timeout, and the task blocks because the queue is full, what happens when task B deletes the queue with vQueueDelete()?
I probably missed something when looking through the source code, but I would have thought that the xTasksWaitingToSend and xTasksWaitingToReceive lists would be walked and the corresponding tasks unblocked with a non-pdPASS return value.
My scenario is that I have a uIP telnetd task that sends and receives via queues to a higher-priority monitor task. If printf() from the monitor task blocks because the uIP buffers are full, and the socket is subsequently closed, it would appear that the system will deadlock: the waiting task is never told the queue has been deleted, and the uIP task will never clean out the queue.
The monitor runs at a higher priority so that the uIP task can be killed from the console. The console should always have control, regardless of what other tasks may have a seizure, with the exception of an exception that requires a reset.
I suppose it might be possible to have the uIP close-socket process repeatedly read the queue with a wait time of 0 until nothing more is returned, and only then delete it.
All this is sub-ideal. It would be better to have the uIP telnetd task run its own instance of the command interpreter, but that gets rather problematic with the protosockets implementation and all the squirrelly little rules it seems to have about tasks blocking. And the newlib/syscalls implementation gets complicated, since it now has to become much more telnet-aware. As it is, it appears the best approach is to use a pair of queues to pass commands and output back and forth, and have uIP use xQueuePeek() to avoid blocking.
I’m open to suggestions on this. I may be approaching this from a bad angle.