Mutex-protected Shared Stack

anonymous wrote on Tuesday, June 05, 2012:

I’m wondering if this is a crazy idea in a “just crazy enough to make sense” or “you’re insane” way. I have a particularly stack-intensive API (for a filesystem). There are multiple tasks that make use of it, and it’s not feasible to add to each task’s stack allocation. Additionally, each call needs to block the calling task until completion.
My current design involves a gatekeeper task around the API, with a mutex plus an entry and an exit semaphore. The mutex serializes access to the API and blocks callers while the API is in use; one semaphore wakes the API task, and the other lets the API task wake the calling task when the call has finished. This design works, but it breaks the priority inheritance mechanism of mutexes, since the task holding the mutex is then waiting on another resource: the API task is unaware of any changes in the calling task’s priority.
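For illustration, the gatekeeper design described above can be sketched with POSIX primitives (on target these would be FreeRTOS calls such as xSemaphoreTake()/xSemaphoreGive(); all names here are illustrative, not the poster’s actual code):

```c
#include <pthread.h>
#include <semaphore.h>

/* Gatekeeper pattern: a mutex serialises callers, an "entry" semaphore
 * wakes the API task, and an "exit" semaphore wakes the caller when done. */
static pthread_mutex_t api_mutex = PTHREAD_MUTEX_INITIALIZER;
static sem_t entry_sem, exit_sem;
static int request_arg, request_result;
static int shutting_down;

/* The gatekeeper task: owns the big stack and runs the stack-heavy API. */
static void *api_task(void *unused)
{
    (void)unused;
    for (;;) {
        sem_wait(&entry_sem);              /* sleep until a caller posts work */
        if (shutting_down)
            break;
        request_result = request_arg * 2;  /* stand-in for the real API call */
        sem_post(&exit_sem);               /* wake the blocked caller */
    }
    return NULL;
}

/* Called from any task: blocks the caller until the gatekeeper finishes.
 * Note the problem: the caller holds api_mutex while blocked on exit_sem,
 * so the mutex's priority inheritance cannot reach the gatekeeper task. */
int api_call(int arg)
{
    int result;
    pthread_mutex_lock(&api_mutex);        /* serialise access to the API */
    request_arg = arg;
    sem_post(&entry_sem);                  /* wake the gatekeeper */
    sem_wait(&exit_sem);                   /* block until the call completes */
    result = request_result;
    pthread_mutex_unlock(&api_mutex);
    return result;
}

/* Small demo: start the gatekeeper, make one call, then shut down. */
int gatekeeper_demo(void)
{
    pthread_t t;
    int r;
    sem_init(&entry_sem, 0, 0);
    sem_init(&exit_sem, 0, 0);
    pthread_create(&t, NULL, api_task, NULL);
    r = api_call(21);
    shutting_down = 1;
    sem_post(&entry_sem);
    pthread_join(t, NULL);
    return r;
}
```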

I had an idea recently to replace this entire mechanism with something similar. By using a shared stack, execution of the API always occurs in the calling task’s context (thereby allowing the mutex’s priority inheritance to function), but uses a single stack for the calls.

I believe this would require some additions to the RTOS code (since TCBs are private). Essentially, upon entering the API, the task would call a function to acquire a lock protecting the shared stack, save the current stack pointer as the first item on the shared stack, and then update the stack pointer and TCB (saving the old values on the shared stack). Execution would then proceed into the stack-heavy API. Upon completion, the reverse would occur: pop the old stack pointer and TCB stack info back into the correct locations and resume execution.

Some care would need to be taken in making this transition, as any stack variables in the function would lose their value, so wrappers may be required. Additionally, I’ve only been considering how this would work on a Cortex-M (though, I believe it would work on an ARM7/9, as well), so it may not apply to other architectures.
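A rough sketch of the wrapper idea, in C-flavoured pseudocode (the actual stack-pointer manipulation would be port-specific assembly, and pxSaveAndSwitchStack()/vRestoreStack() are hypothetical additions to the kernel, not existing FreeRTOS functions):

```
/* Sketch only: entering the stack-heavy API on a shared stack. */
void vApiWrapper( Params *pxParams )
{
    /* The caller itself takes the mutex, so priority inheritance works. */
    xSemaphoreTake( xSharedStackMutex, portMAX_DELAY );

    /* Hypothetical port-layer routine: push the current SP onto the shared
       stack, then point SP and the TCB's stack fields at the shared stack.
       After this point, the wrapper's own locals are no longer reachable,
       which is why the work is done through a wrapper in the first place. */
    pxSaveAndSwitchStack( pxSharedStackTop );

    /* Runs entirely on the shared stack, in the calling task's context. */
    vStackHeavyApi( pxParams );

    /* Pop the saved SP and TCB stack info; back on the caller's own stack. */
    vRestoreStack();

    xSemaphoreGive( xSharedStackMutex );
}
```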

So, am I crazy for dreaming this up?

rtel wrote on Tuesday, June 05, 2012:

Not crazy, I would say.  This is exactly how interrupt nesting works on the Cortex-M3 already, but it is done in hardware.  It means that each task does not need to allocate a stack large enough for interrupts to nest to their potential maximum depth.  Different scenario, but the same problem you describe.  If you look at the port layer for the PIC32/MIPS port you will see that this is done in software when an interrupt is first taken - the software manually switches stacks to an interrupt stack (to save task stack space) because on that core it does not happen automatically in the hardware.
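The software stack switch rtel describes can be outlined in pseudocode (heavily simplified; the real version is MIPS assembly in the port layer, nesting bookkeeping is elided, and all names are illustrative):

```
/* On interrupt entry (simplified): */
save_context_on_task_stack();    /* push registers onto the interrupted task's stack */
saved_sp = SP;                   /* remember the task's stack pointer */
if( interrupt_nesting == 0 )
    SP = interrupt_stack_top;    /* first interrupt: switch to the dedicated ISR stack */
interrupt_nesting++;
call_interrupt_handler();        /* handler, and any nested ISRs, run on the ISR stack */
interrupt_nesting--;
SP = saved_sp;                   /* restore the task's own stack pointer */
restore_context_from_task_stack();
```

Because nested interrupts stay on the single ISR stack, each task only needs enough stack for its own deepest call chain plus one context save, which is the same saving the shared-stack idea aims for.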

Regards.