eTaskGetState behavior if an invalid handle is passed?

I am considering a scheme where an input task spawns one-time command tasks that execute and perish. Some of these tasks need to use a shared radio resource, which I plan to put in its own task with a command queue.

I am considering having the commands contain pointers telling the radio task where to stick the results, plus a task handle to notify on completion. To allow for the possibility of things getting lost in the outside world, the one-time tasks need to expire and terminate, but at that point the pointer and handle would still be on the queue and the command might not be recallable.

I want to use eTaskGetState to decide whether the task still exists (and therefore whether the pointer to its data is still valid) before a notify is sent to resume it. If the task is deleted but not yet cleaned up, I see how the kernel handles that; but if it has already been cleaned up and I am holding old, outdated info, then what?
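For concreteness, roughly the pattern I have in mind - the struct and names below (RadioCommand_t, xRequester, pucResultBuf, vCompleteCommand) are made up for illustration:

```c
#include <string.h>
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical command record carried on the radio task's queue. */
typedef struct
{
    TaskHandle_t xRequester;    /* task to notify on completion */
    uint8_t     *pucResultBuf;  /* where to stick the results   */
} RadioCommand_t;

/* The check I had in mind (needs INCLUDE_eTaskGetState == 1). It is
 * only safe while the TCB is guaranteed to still be allocated, which
 * is exactly the part in question. */
static void vCompleteCommand( const RadioCommand_t *pxCmd,
                              const uint8_t *pucResp, size_t xLen )
{
    if( eTaskGetState( pxCmd->xRequester ) != eDeleted )
    {
        memcpy( pxCmd->pucResultBuf, pucResp, xLen );
        xTaskNotifyGive( pxCmd->xRequester );
    }
    /* If the requester was already deleted *and* cleaned up, the
     * handle points at freed memory and the read above is undefined. */
}
```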

I don’t think that this is the best way, since the task handle is just a pointer to its (allocated) TCB. You can’t really know whether it is valid (pointing to a valid TCB) or not.
You’d need to wrap it into your own management structure somehow.
IMHO this design would be overkill and pretty costly.
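For what it's worth, one shape such a wrapper could take is a slot table with a generation counter, so a stale handle can be detected before it is used. A rough sketch, every name my own:

```c
#include <stddef.h>
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* A slot table owned by the application; commands carry a slot
 * index plus the generation they saw instead of a raw handle. */
typedef struct
{
    TaskHandle_t xHandle;       /* NULL when the slot is free            */
    uint32_t     ulGeneration;  /* bumped every time the slot is retired */
} TaskSlot_t;

static TaskSlot_t xSlots[ 8 ];

/* Radio-task side: notify only if the slot still matches. Scheduler
 * suspension keeps the owner from retiring the slot mid-check. */
static BaseType_t xNotifyIfCurrent( size_t xSlot, uint32_t ulGenSeen )
{
    BaseType_t xOk = pdFALSE;

    vTaskSuspendAll();
    {
        if( ( xSlots[ xSlot ].xHandle != NULL ) &&
            ( xSlots[ xSlot ].ulGeneration == ulGenSeen ) )
        {
            xTaskNotifyGive( xSlots[ xSlot ].xHandle );
            xOk = pdTRUE;
        }
    }
    ( void ) xTaskResumeAll();

    return xOk;
}

/* One-shot-task side: called just before the task dies. */
static void vRetireSlotAndDie( size_t xSlot )
{
    vTaskSuspendAll();
    {
        xSlots[ xSlot ].xHandle = NULL;
        xSlots[ xSlot ].ulGeneration++;
    }
    ( void ) xTaskResumeAll();

    vTaskDelete( NULL );   /* must come after resuming the scheduler */
}
```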

Thanks. The resource cost is not an issue, but the potential instability and the level of effort to self-manage tasks would be.

You could queue the task name and look up the name to see if the task still exists.
The other option would be to add a function that walks all the lists a task could be on to see if it still exists; that would need to use the feature that lets you add your own code to task.c.

Neither of these is foolproof unless done in a critical section, as a higher-priority task could still delete the target between the check and the use.
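If you go the name route, something along these lines could work (assuming INCLUDE_xTaskGetHandle is enabled in FreeRTOSConfig.h). Suspending the scheduler stands in for the critical section, which is sufficient here because deletion only ever happens at task level:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Needs INCLUDE_xTaskGetHandle == 1. No other task (including Idle,
 * which does the actual freeing) can run between the lookup and the
 * notify while the scheduler is suspended. Note xTaskGetHandle is
 * slow - it walks the task lists. */
static BaseType_t xNotifyByName( const char *pcName )
{
    TaskHandle_t xTask;
    BaseType_t xFound = pdFALSE;

    vTaskSuspendAll();
    {
        xTask = xTaskGetHandle( pcName );

        if( xTask != NULL )
        {
            xTaskNotifyGive( xTask );   /* never blocks, so OK here */
            xFound = pdTRUE;
        }
    }
    ( void ) xTaskResumeAll();

    return xFound;
}
```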

Thanks. Passing the task name instead of the handle might work, but I intended to spawn replicas of a task using the same code. I could have a base name plus a numeric index to disambiguate them.
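As a sketch of the naming (the wrapper function and its parameters are placeholders of mine):

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Spawn a replica with a unique, queryable name: base name plus a
 * monotonically increasing index. */
static BaseType_t xSpawnCommandTask( TaskFunction_t pxCode, void *pvParams )
{
    static uint32_t ulIndex = 0;   /* assumes only one task spawns these */
    char pcName[ configMAX_TASK_NAME_LEN ];

    snprintf( pcName, sizeof( pcName ), "cmd%lu",
              ( unsigned long ) ulIndex++ );

    /* pdFAIL here is the 'busy' case described further down. */
    return xTaskCreate( pxCode, pcName, configMINIMAL_STACK_SIZE,
                        pvParams, tskIDLE_PRIORITY + 1, NULL );
}
```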

At least until I try to throw TCP/IP and some protocols on it, I think the system will live in the idle task most of the time. There is a processor in the RFID module doing a lot of the heavy lifting (which is what all these commands need to manage and wait for) and another processor doing RF noise analysis for other purposes outside the RFID band.

The only way I planned for a task to be deleted was by itself. I can keep the one-time tasks at a priority lower than or equal to the one doing the work.

Not a direct answer to your question about task handles, but I wonder if spawning a task per command is necessary in the first place. I never feel comfortable with patterns that create and delete tasks at run time. Can you have the commands all run in the same thread? Have the thread block on a queue, send the command you want to run to the queue, have the task process the command and then go back to block on the queue again?
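In sketch form, with illustrative names:

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

/* One long-lived worker replaces the create/delete-per-command
 * pattern. */
typedef struct
{
    int   iOpcode;   /* which command to run      */
    void *pvArgs;    /* command-specific argument */
} WorkItem_t;

static QueueHandle_t xWorkQueue;

static void vWorkerTask( void *pvParameters )
{
    WorkItem_t xItem;

    ( void ) pvParameters;

    for( ;; )
    {
        /* Blocks here, using no CPU, until a command is queued. */
        if( xQueueReceive( xWorkQueue, &xItem, portMAX_DELAY ) == pdPASS )
        {
            /* ...process xItem, then loop back and block again. */
        }
    }
}
```

Senders just queue a WorkItem_t; nothing is ever created or torn down at run time.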

The system has a resource (the RFID module) that is used by multiple processes internal to the system (autonomous measurements) and external to it (from multiple communications sources). I want to consolidate into one task all of the places that currently use a mutex to arbitrate the RFID but duplicate the code that drives it, and have all the other tasks feed that task requests.

I would have one queue for the lower-priority autonomous tasks and (at least) one for the externally sourced commands.
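One way I could picture a single radio task draining both is a FreeRTOS queue set (just one option, not something I have settled on; it needs configUSE_QUEUE_SETS set to 1, and strict priority between the two queues would need a little extra logic on top). A sketch with made-up names:

```c
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct
{
    int   iOp;       /* radio operation requested */
    void *pvArgs;
} RadioCmd_t;

static QueueHandle_t xExternalQ, xAutonomousQ;
static QueueSetHandle_t xEitherQ;

static void vRadioTask( void *pvParameters )
{
    QueueSetMemberHandle_t xActive;
    RadioCmd_t xCmd;

    ( void ) pvParameters;

    /* Created here for brevity; in practice create before any sender runs. */
    xExternalQ   = xQueueCreate( 8, sizeof( RadioCmd_t ) );
    xAutonomousQ = xQueueCreate( 8, sizeof( RadioCmd_t ) );
    xEitherQ     = xQueueCreateSet( 16 );   /* sum of both lengths */
    xQueueAddToSet( xExternalQ, xEitherQ );
    xQueueAddToSet( xAutonomousQ, xEitherQ );

    for( ;; )
    {
        /* Block until either queue holds a command, then read from
         * whichever member the set reports as ready. */
        xActive = xQueueSelectFromSet( xEitherQ, portMAX_DELAY );

        if( xQueueReceive( ( QueueHandle_t ) xActive, &xCmd, 0 ) == pdPASS )
        {
            /* Drive the RFID module; xActive says which source it was. */
        }
    }
}
```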

The application is a redo of a system that has worked since 2011 with a smaller processor, a different wireless sensor type and coprocessor, etc. In that system, commands to the radio were posted to a queue and carried routing info. Incoming commands were parsed and routed, then just ‘let go of’. Responses were parsed for any internally useful info and then transmitted based on the routing info that followed the command. Simplistically: serial RX --> queue --> command parser --> queue --> radio handler --> queue --> response parser --> queue --> serial TX.
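In rough outline, each hop of that pipeline carried something like the record below; the field names are invented here for illustration:

```c
#include <stddef.h>
#include <stdint.h>

/* The same record rides every queue hop, so the response parser
 * knows where to send the reply without any task waiting for it. */
typedef struct
{
    uint8_t ucSource;           /* which serial/internal source it came from */
    uint8_t ucRoute;            /* where the response should be sent back    */
    size_t  xLen;
    uint8_t ucPayload[ 64 ];
} RoutedMsg_t;
```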

The new system has many command sources (the old system had two serial sources and one internal command source). Many commands need never see the radio. Some need the response to do their next thing, etc. My thought - not fully played through - had been that the command parser might spawn a transient task that specifically handled command A, etc. It would be created, it would create temporary data space, it would send a command and then wait for a notify or a timeout. If it timed out, it could send the outside source a waiting message periodically, or a timeout-fail response. Whether it timed out or succeeded and sent its result, it would no longer be needed and would terminate.

If there were not enough resources to spawn a new task, the command parser would fail the command as ‘busy’.
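A sketch of one such transient task, with the radio and reporting helpers left as hypothetical externs; note that the timeout path can still leave the pointer and handle sitting on the radio queue, which is exactly the hazard I opened this thread with:

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical application hooks - not FreeRTOS APIs. */
extern void vQueueRadioRequest( uint8_t *pucBuf, TaskHandle_t xNotify );
extern void vSendResult( void *pvRoute, const uint8_t *pucBuf );
extern void vSendTimeoutFail( void *pvRoute );

/* One transient "command A" task: create working space, queue the
 * radio request, wait for a notify or time out, report, self-delete.
 * Needs INCLUDE_xTaskGetCurrentTaskHandle == 1. */
static void vCommandATask( void *pvRouting )
{
    uint8_t *pucWork = pvPortMalloc( 128 );   /* temporary data space */

    if( pucWork != NULL )
    {
        vQueueRadioRequest( pucWork, xTaskGetCurrentTaskHandle() );

        if( ulTaskNotifyTake( pdTRUE, pdMS_TO_TICKS( 5000 ) ) != 0 )
        {
            vSendResult( pvRouting, pucWork );   /* radio task answered */
        }
        else
        {
            /* Timed out: the radio task may still hold pucWork and our
             * handle on its queue - the stale pointer/handle problem. */
            vSendTimeoutFail( pvRouting );
        }

        vPortFree( pucWork );
    }

    vTaskDelete( NULL );   /* the Idle task reclaims TCB and stack */
}
```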

The perceived advantage (and it is very untested) is that the command parser would not get bogged down executing commands sequentially, and that the individual tasks could execute a series of radio operations in an encapsulated way. While that was happening, the command parser could execute (rather, could start other tasks to execute) other commands of similar complexity.

I can see where task creation and deletion is going to put a strain on the kernel, and where there is always a risk of deleted tasks not freeing up memory. I can also see that with more tasks there is more likelihood of priority clashes. Finally, for simple commands the older open-loop queued approach is more streamlined. The perceived advantage comes when a series of radio functions is needed to execute a command.

I am curious why you are hesitant about transient tasks.

If tasks only ever delete themselves, then only the Idle task will actually free the deleted task's resources, so if your check-and-notify operation runs at a priority above Idle, the memory cannot be freed out from under it and the check is safe.

If the task might die and be respawned, then even the handle pointing at valid memory may not be a perfect test, as the new task might well be created at the same TCB address.

As Richard Barry says, a task that repeatedly dies and is respawned can often be replaced with a task that just blocks until it has the next thing to do.