I have 3-4 tasks, each with a different delay value.
For example, one task has a delay of 1 ms, one of 7 ms, and one of 2 ms.
Is it better to collapse these into a single task with a 1 ms delay and then use a timer to run the code for the longer-delayed tasks?
Specifically, it seems this approach may conserve stack RAM: rather than having multiple tasks with separate stacks, we could use a single task with one properly sized stack.
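For context, the current layout is roughly like this (a simplified sketch, not my actual code; the task bodies and names are placeholders, and it assumes a 1 kHz tick so pdMS_TO_TICKS(1) is one tick):

```c
#include "FreeRTOS.h"
#include "task.h"

/* Placeholders for the real work each task performs. */
extern void CheckUserInput(void);
extern void ReadSensors(void);
extern void StartScreenRefreshDMA(void);

static void vInputTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        CheckUserInput();
        vTaskDelay(pdMS_TO_TICKS(1));   /* 1 ms task */
    }
}

static void vSensorTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        ReadSensors();
        vTaskDelay(pdMS_TO_TICKS(2));   /* 2 ms task */
    }
}

static void vScreenTask(void *pvParameters)
{
    (void)pvParameters;
    for (;;) {
        StartScreenRefreshDMA();
        vTaskDelay(pdMS_TO_TICKS(7));   /* 7 ms task */
    }
}

void CreateLowPriorityTasks(void)
{
    /* Three separate stacks, one per task. */
    xTaskCreate(vInputTask,  "input",  128, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vSensorTask, "sensor", 128, NULL, tskIDLE_PRIORITY + 1, NULL);
    xTaskCreate(vScreenTask, "screen", 128, NULL, tskIDLE_PRIORITY + 1, NULL);
}
```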
How long will the individual tasks claim the CPU when they are not delayed waiting? Is there a chance of starvation, convoy effects, or lockout when you use a single task?
The delayed tasks are all low-priority tasks that run at the same priority, checking for user input, performing screen updates, etc. The delays are just in place to reduce load on the MCU, which is running at about 91% idle.
I have other tasks that are interrupt driven; those are high priority.
Sorry for possibly being unclear; you did not respond to my question. If any of the "processing user input, performing screen updates, etc." code may use a lot of CPU cycles (even if the overall load is negligible, we are talking about peaks here), it will affect the timing of all the timers serviced in your collated routine (the same trivially holds true for all FreeRTOS software timers, which is well documented). You may consider servicing any such "dangerous" timers outside of the collated routine.
Sorry, I thought I was answering your question (in a roundabout way). Since the tasks all run at the same priority, some tasks do block while another task is running, but because it isn't very timing-critical, this is ok. The user doesn't notice a 5 ms lag in processing user input, for example.
The delays are calculated to be longer than the worst-case execution time of the task before delaying again, so the 7 ms delay task never takes longer than 7 ms, etc. The tasks actually take much less time than the delay; for example, the 7 ms delay was chosen to provide a reasonable screen refresh rate. The actual refresh is performed by DMA, so the task itself runs fairly quickly.
Timing aside, I guess the core of my question was about stack usage: is it more stack/RAM/performance efficient to have one task with a larger stack versus many tasks with smaller stacks? Because it is difficult to size each stack accurately, every stack carries some unused headroom. If the only difference between the tasks is the delay period, and they run at the same priority (without round-robin switching), it may be more efficient to have them in a single task.
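(As a side note on sizing: assuming INCLUDE_uxTaskGetStackHighWaterMark is set to 1 in FreeRTOSConfig.h, the unused headroom of each stack can be checked with something like this sketch:)

```c
#include <stdio.h>
#include "FreeRTOS.h"
#include "task.h"

/* Reports the minimum amount of stack (in words) that has ever
 * remained free for the given task; pass NULL for the calling task. */
void ReportStackHeadroom(TaskHandle_t xTask, const char *pcName)
{
    UBaseType_t uxUnusedWords = uxTaskGetStackHighWaterMark(xTask);
    printf("%s: %lu words of stack never used\n",
           pcName, (unsigned long)uxUnusedWords);
}
```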
You do save resources, but combining "unrelated" operations into a single task often adds some complexity to that task. That is ultimately a balance that needs to be looked at. If they never need to overlap in operation, and all run fast enough not to interfere with each other's requirements, having a base task function that cycles through calling each of the appropriate "sub-task" functions as needed can be a resource saver. The big question is whether you are short enough on memory to make it worth the (small) added complexity.
My 2c: I would just make one task responsible for this and use vTaskDelay to make the task sleep for the delays. This will place all of it on the same stack and make the behavior deterministic every 14 ms, when all three jobs fall due at the same time; see the sketch below.
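A rough sketch of what I mean (the helper functions stand in for the bodies of your existing tasks, and it assumes configTICK_RATE_HZ is 1000 so one tick is 1 ms):

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "task.h"

/* Placeholders for the work the three original tasks performed. */
extern void CheckUserInput(void);        /* was the 1 ms task */
extern void ReadSensors(void);           /* was the 2 ms task */
extern void StartScreenRefreshDMA(void); /* was the 7 ms task */

static void vHousekeepingTask(void *pvParameters)
{
    TickType_t xLastWake = xTaskGetTickCount();
    uint32_t ulMs = 0;

    (void)pvParameters;
    for (;;) {
        /* Wake on a fixed 1 ms beat and dispatch whichever jobs are due. */
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(1));
        ulMs++;

        CheckUserInput();                               /* every 1 ms */
        if ((ulMs % 2U) == 0U) ReadSensors();           /* every 2 ms */
        if ((ulMs % 7U) == 0U) StartScreenRefreshDMA(); /* every 7 ms */
    }
}
```

Every 14 ms all three jobs fall due in the same iteration and always execute in the same order, which is the deterministic behavior I mentioned.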
Just think about how many context switches you are causing and how the scheduler will accomplish what you want, and do what is best for your application. If you use a software timer, the callback runs in the timer service task and requires a context switch anyway, so you do not save anything over just having a single task and using vTaskDelay; in fact you are probably making things worse (less efficient and less deterministic) by using the timer.
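For comparison, the software-timer variant under discussion would look roughly like this (requires configUSE_TIMERS set to 1; the job names are the same placeholders as before); every callback executes in the single timer service task:

```c
#include "FreeRTOS.h"
#include "timers.h"

/* The same three placeholder jobs. */
extern void CheckUserInput(void);
extern void ReadSensors(void);
extern void StartScreenRefreshDMA(void);

static void vInputCb(TimerHandle_t xTimer)  { (void)xTimer; CheckUserInput(); }
static void vSensorCb(TimerHandle_t xTimer) { (void)xTimer; ReadSensors(); }
static void vScreenCb(TimerHandle_t xTimer) { (void)xTimer; StartScreenRefreshDMA(); }

void CreateHousekeepingTimers(void)
{
    /* All callbacks share the timer service task's stack, but each
     * expiry still forces a switch into that task, so nothing is
     * saved over one plain task using vTaskDelay. */
    TimerHandle_t xInput  = xTimerCreate("input",  pdMS_TO_TICKS(1), pdTRUE, NULL, vInputCb);
    TimerHandle_t xSensor = xTimerCreate("sensor", pdMS_TO_TICKS(2), pdTRUE, NULL, vSensorCb);
    TimerHandle_t xScreen = xTimerCreate("screen", pdMS_TO_TICKS(7), pdTRUE, NULL, vScreenCb);

    if (xInput)  xTimerStart(xInput, 0);
    if (xSensor) xTimerStart(xSensor, 0);
    if (xScreen) xTimerStart(xScreen, 0);
}
```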
I highly recommend using something like Tracealyzer to visually inspect how the timing works out and what your efficiency is for the various options you explore.
If the operations are purely compute-bound, then there isn't an issue. But if some of the operations do things that block on I/O, then making them separate tasks lets their execution overlap, while a single task cannot overlap them; that can make the difference between meeting the timing requirements or not.
The once-every-7-ms task might even take more than 1 ms to execute, so combining them sequentially means the 1 ms task can't possibly meet its once-a-millisecond specification, whereas it could if they were separate tasks with the faster one at a higher priority.
That is why "it depends". The factors that determine whether you CAN combine them go beyond what was given. If the operation of each task is quick enough that the crude "big old loop" method of scheduling them works, then combining them can make sense. But it is also possible that a single task just can't meet the requirements, so you can't combine them.
All true and good advice. I took some cues from the descriptions above that the tasks generally complete quickly and are not that timing sensitive, hence my suggestion.
Also, we did not really answer the stack size question specifically: generally, the fewer stacks you have, the less memory you consume, so fewer is better in terms of memory use, and fewer context switches mean more time to execute your actual code. But, like @richard-damon says, we are getting very deep into "it depends" territory now.