I’m really stuck and need some advice.
I have a shared resource and a management task that periodically allows access to this shared resource.
There can be multiple “user” tasks with random priorities that want access to the resource at random times. They all need to wait for the management task to allow them access. Once the management task allows access, it needs to then ensure it waits for all “user” tasks to do their thing before the management task disallows access again.
I have this implemented with an event group where all “user” tasks begin waiting for an event. Some time later, the management task pulses the event, thereby unblocking all waiting tasks.
xEventGroupSync(UserEvt, EVT_GO, EVT_GO, 0);
Note 1: In my case, I don’t care that all tasks unblock at the same time, the shared resource is fine with this.
Note 2: If a new task shows up and waits on the event 1 cpu cycle after the sync, it is forced to wait until next time and this is perfectly acceptable.
Note 3: I’m using Sync instead of Set, because if the management task simply “sets” the event, multiple user tasks with higher priorities may take over the CPU and the management task never gets the chance to “clear” the event.
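For reference, here is a minimal sketch of my current setup (UserEvt and EVT_GO are from my actual code; the task bodies are simplified):

```c
/* Shared event group, created once at startup with xEventGroupCreate(). */
EventGroupHandle_t UserEvt;

#define EVT_GO ((EventBits_t)(1 << 0))

/* Each "user" task blocks until the management task pulses EVT_GO.
   xClearOnExit = pdFALSE: the sync call below clears the bit instead. */
void vUserTask(void *pvParameters)
{
    for (;;) {
        xEventGroupWaitBits(UserEvt, EVT_GO, pdFALSE, pdTRUE, portMAX_DELAY);
        /* ... do their thing with the shared resource ... */
    }
}

/* The management task "pulses" the event. xEventGroupSync() sets the
   bit and clears it again before any unblocked task gets to run, so
   higher-priority user tasks can't preempt between the set and clear. */
void vManagementTask(void *pvParameters)
{
    for (;;) {
        /* ... allow access to the shared resource ... */
        xEventGroupSync(UserEvt, EVT_GO, EVT_GO, 0);
        /* PROBLEM: the task must wait here until every unblocked
           user task has finished, but there is no way to know how
           many tasks the pulse released. */
    }
}
```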
The problem is that the management task has to wait for all “user” tasks to complete, and there is no way of knowing how many were unblocked. I have tried all sorts of solutions (counting semaphores, event groups, queues, suspending tasks), and all of them are plagued with race conditions.
Is there a clean and elegant way to do this that does not depend on task priorities? I’m out of ideas. My best bet would be to modify the kernel so that Sync returns how many tasks were unblocked, and I really don’t want to do that.