Dynamic Prioritization

Can you picture this? Our tasks are usually waiting and are served within microseconds. Prioritizing one over another doesn't make any sense.

Good Luck!

Your questions are very hard.
What took too long was a braking maneuver in a self-driving car. We know the software is fine; it has always worked. The mistake must be in the kernel.
My question is: why take the complicated way if you know the easy one?
Okay, you've made it possible to prioritize tasks and ISR-to-task switching. My problem stays the same: I don't have a driving license and have to wait for my robo-taxi forever.
What you state is true, but not 100%. My combined operation can in fact save some cycles. Where is the border? How do you make a decision? It looks like my proposal is too expensive; I understand you would have to implement vSemaphoreTakeBlocking() for every platform you support. If so, that is no problem. But if you are looking for an investment in FreeRTOS that really makes sense, I can suggest Dynamic Prioritization. I'm not using your prioritization at all; I just can't decide which of my running tasks is not important.

Thank you for your reply. Best regards!

They won’t wait just microseconds if you keep on disabling interrupts, since the only thing that can start a task at that sort of time scale IS an interrupt.

If your system is always so lightly loaded that you rarely have to worry about priorities, then my guess is you have spent too much on your processor.

If that really is true, why do you need to worry about "Dynamic Prioritization" if you claim your system never has to wait?

Which just goes against your previous statement. Apparently the “braking-maneuver” needs to be higher priority so it can interrupt a lower priority operation.

You claim your software is fine, but you also point out that it does something wrong, and that the kernel provides a solution for this problem which you simply refuse to use; yet you blame the kernel. It seems more that the problem is your software design methods, and a failure to match them to the tools you are using.

Talking about code that just blindly disables interrupts for apparently extended periods of time, and then complaining that a high-priority event got delayed, is a sign of that problem.

Yes, PERHAPS your primitive can save a few cycles, but I am not really sure about that, and the operation, to be done properly, isn't trivial (and may not be available on all machines): what you really need is for a task to block in such a way that when it unblocks, it unblocks into a critical section, which may require two different ways of yielding the processor.

This may well actually increase the cycles used in EVERY task switch, so not actually a performance gain.

I am not sure you even understand what you are talking about, since you began with this statement:

This is just false, as “PRIORITY” is a property assigned to a “TASK” when you create it, and not something that happens just because you are in a given “function”

If you want a given “function” to have a higher priority, just begin the function with a call to get and save the current priority, and then set it to the higher one, and then end with restoring the priority. (If that is what you are talking about).
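For readers unfamiliar with the calls involved, the save/set/restore pattern described above can be sketched with the standard FreeRTOS task-control API (the function name and the priority chosen are assumptions for illustration, not code from this thread):

```c
/* Sketch only: temporarily raise the calling task's priority for the
 * duration of a time-critical function, then restore it.  Uses the
 * standard FreeRTOS calls uxTaskPriorityGet() / vTaskPrioritySet(). */
#include "FreeRTOS.h"
#include "task.h"

void vImportantFunction( void )
{
    /* NULL means "the calling task". */
    UBaseType_t uxSavedPriority = uxTaskPriorityGet( NULL );

    /* configMAX_PRIORITIES - 1 is the highest valid task priority. */
    vTaskPrioritySet( NULL, configMAX_PRIORITIES - 1 );

    /* ... do the time-critical work here ... */

    vTaskPrioritySet( NULL, uxSavedPriority );
}
```

Note that this only affects the calling task; other tasks keep the priorities the designer assigned them.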


Please forgive me, but you are completely wrong!

Imagine you were a programmer and had to decide how important the task you are writing is! You just can't. You don't have a clue about any other task that is running. The whole system will hang. This should answer your first question: it's me, I just can't. That's why I suggest Dynamic Prioritization on a function level.

No, the braking maneuver doesn't need higher priority. It depends on many other tasks. That's the mistake. Every running task should be given the same priority.

It is not my software that is crashing, it is Tesla's. They have about 300,000 lines of C++ not working. If the mistake isn't in the software, it obviously is in the kernel.

My code doesn't blindly disable interrupts. I have even implemented a heap that guarantees constant, low-time allocation and deallocation. You can find it on GitHub.

I am sure it can save some cycles, which doesn't solve the problem. The problem is in your API, letting programmers decide how important tasks are relative to each other.

The priority of a task cannot be assigned at the moment it is created, again, again, and again. Every task is accessing the kernel, which is high priority. You can do it on a function level without any problems. What's right is that the priority has to be assigned to a task and can vary over time. vSemaphoreTakeBlocking() raises the priority; unblocking sets it back to normal.
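To make the proposal concrete: vSemaphoreTakeBlocking() does not exist in FreeRTOS. The following is only a hypothetical sketch of what such a combined "boost and take" primitive might look like if built from existing FreeRTOS calls; the function names, signatures, and priority choice are assumptions, not code from this thread:

```c
/* HYPOTHETICAL sketch: combine a temporary priority boost with taking a
 * mutex, restoring the saved priority when the mutex is given back.
 * The caller holds the saved priority so the pair is re-entrant. */
#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

void vSemaphoreTakeBlocking( SemaphoreHandle_t xMutex, UBaseType_t *puxSaved )
{
    *puxSaved = uxTaskPriorityGet( NULL );           /* remember old level */
    vTaskPrioritySet( NULL, configMAX_PRIORITIES - 1 ); /* "raise" phase   */
    ( void ) xSemaphoreTake( xMutex, portMAX_DELAY );
}

void vSemaphoreGiveUnblocking( SemaphoreHandle_t xMutex, UBaseType_t uxSaved )
{
    ( void ) xSemaphoreGive( xMutex );
    vTaskPrioritySet( NULL, uxSaved );               /* back to normal     */
}
```

Whether fusing the two steps into one kernel call saves anything over this two-call version is exactly what is disputed later in the thread.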

Please excuse me, but your kernel looks like it was designed 40 years ago.

Best regards,

Sven Bieg

I wouldn't have come to you with my problem if I didn't have the solution.

You got it! :slightly_smiling_face:

Then I guess you don’t understand the power of requirements and design.

For this sort of system, SOMEONE needs to know what the system is supposed to be doing, and the timing requirements for it, and they need to assign the needed resources and priorities.

I will say that if you don't know what else is running in the system, you should not be touching it if it is at all a critical system. PERIOD. Unless someone who DOES know has given you the requirements for your piece to fit into the system.

Note, it is clear you don’t understand what “priority” means, as the “kernel” isn’t a task, and thus doesn’t HAVE “Priority”, except for the small exception of the Timer Service task and the Idle task.

There are definite reasons and criteria used to assign different priorities to different tasks, but that needs a level of design that it just seems you do not understand.

I will point out that you still haven't answered the question of how your vSemaphoreTakeBlocking() makes ANY significant improvement in timing, as all the possible delays caused by the two separate statements could have occurred with just a minorly different order of events, so all it seems you are doing is creating race conditions.

Sorry, but that seems to be the facts.

If you want a fancier kernel, then there are plenty of others out there, feel free to use them.

You also may want to check your timeline, as 40 years ago the state of the art for kernels was quite different (C was just evolving as a language then, and not yet a standard).


I do understand the concept of priority inheritance, and I know it works. It is my decision to leave task priority normal and raise it on a function level. So I am only missing one function in FreeRTOS. I'm not using it; maybe some of your customers understand my concept.
Okay, I guess it is 20 years. There was something on TV about NASA once having had a problem without priority inheritance.

There are fewer task switches, saving some cycles. Please note that this also happens when the mutex is released; interrupts are enabled afterwards.

I know this is almost nothing, but vSemaphoreTakeBlocking() also has a symbolic character. The idea of a critical mutex is to hold it for as short a time as possible.

The idea of task prioritization was absolutely right when there was one core and multiple tasks. Today's task landscape is different: we have to spread our workload over multiple cores to get maximum performance.
Please imagine: what took one second back in the 90s now takes 1 ms! What took a minute now takes 60 ms.
I found vTaskSuspendAll() when porting my heap to FreeRTOS; a critical mutex is the best option in this place. It is really easy to implement and possible on every platform.

That’s all,

Sven Bieg

That may be your decision, but it may (and likely will) not be a system designer’s decision.

You want a specific function set to be "strongly atomic," i.e. not only serialized against other tasks, but running uninterrupted by any task activity. Your implementation using vTaskSuspendAll()/xTaskResumeAll() may serve the purpose, but to be consistent, you may want to disallow all interrupt activity during that time as well, if you want fully predictable maximum bounds on the time it takes to pass through your function set. Thus you will need a "hard interrupt disable/enable," which is even more invasive to the system than the critical section (interrupt throughput is one of the most critical aspects of many embedded system designs).
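The two levels of protection being contrasted can be sketched with standard FreeRTOS calls (a sketch only, assuming a FreeRTOS project; the function names are illustrative):

```c
/* Two levels of "strong atomicity" in FreeRTOS:
 * vTaskSuspendAll()/xTaskResumeAll() only stops other TASKS from running,
 * while taskENTER_CRITICAL()/taskEXIT_CRITICAL() also masks interrupts
 * (up to configMAX_SYSCALL_INTERRUPT_PRIORITY on ports that support it). */
#include "FreeRTOS.h"
#include "task.h"

void vSerializedAgainstTasks( void )
{
    vTaskSuspendAll();         /* no task switch can occur ...             */
    /* ... heap bookkeeping here; ISRs can still run and preempt us ...    */
    ( void ) xTaskResumeAll();
}

void vSerializedAgainstInterruptsToo( void )
{
    taskENTER_CRITICAL();      /* interrupts masked as well                */
    /* ... must be very short: every masked cycle adds interrupt jitter ... */
    taskEXIT_CRITICAL();
}
```

Only the second form gives a fully predictable upper bound, and it is exactly the form that hurts interrupt latency the most.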

The question is WHY? In a well-designed system, the percentage of CPU time spent by application tasks in memory management is small (because those systems are written to do something other than allocate and deallocate memory blocks; they must DO something with that memory). So assume your tasks spend 10%, at worst 20%, of their computation time in the memory manager (and since your memory manager is designed to keep that time very short and efficient to begin with, that is probably already a very high estimate). That means that 80-90% of their computation time is spent outside of your "hard mutex" protection, meaning the tasks are fully subjected to concurrency and competition for the CPU. Why bother to make the 10-20% predictably bounded if you still need to ensure that the remaining 80-90% is coordinated to weigh real-time against throughput requirements?

As @richard-damon pointed out, a system designer/architect will need to make these decisions and will therefore need to take into consideration every computational path each task will take. A "clean" malloc()/free() will be one of the least concerns in this. Since your memory manager already appears to put a (small) maximum bound on each operation, an optimization that makes this bound even smaller (at the expense of concurrency throughput, which may make things worse rather than better, as explained above) does not appear to make sense.


That's right for a microcontroller; FreeRTOS is also targeting embedded systems.

Yes, that’s true.

Because it is easy to predict what the application is doing. When I'm looking for a mistake, I don't use a computer. I review my steps in my mind, and there are fewer steps with my solution.

Like I said.

My heap won't take 1 ms to complete, unlike an ISR. Interrupt requests cause the same "problem": the scheduler might get delayed by 1 ms. This is not really a problem.

xSemaphoreTakeFromISR() is a problem. The only way to make this possible is a spin loop, keeping one core in a loop until the mutex is released. There is one good way to go here, as you can see in the standard library: they are using a condition_variable; I'm calling it a signal. An ISR can signal a task, and the task has access to the heap. If you asked me, I would tell you to deprecate xSemaphoreTakeFromISR() and suggest using a signal here. Do you know the equivalent in FreeRTOS? I don't.
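For reference, the "ISR signals a task" pattern described here maps onto FreeRTOS direct-to-task notifications. A sketch, assuming a FreeRTOS project (the ISR and task names are illustrative):

```c
/* The ISR never touches the heap or a mutex: it only wakes a worker task,
 * which then does the heap work in task context. */
#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t xWorkerTask;   /* set when the worker task is created */

void vExampleISR( void )           /* hypothetical interrupt handler */
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    vTaskNotifyGiveFromISR( xWorkerTask, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

void vWorkerTask( void *pvParameters )
{
    ( void ) pvParameters;
    for( ;; )
    {
        /* Block until the ISR signals us. */
        ulTaskNotifyTake( pdTRUE, portMAX_DELAY );
        /* ... allocate / process here, with normal task-level locking ... */
    }
}
```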

Please forgive me, I'm working in the electrical industry all day and don't have much time! I'm going to illustrate Dynamic Prioritization on my GitHub site. ISRs and the heap only take microseconds, while the scheduler is working at 100 Hz / 10 ms. The scheduler can tolerate an acceptable delay.

I'm happy you are still here. Have a good evening!

FreeRTOS is targeting MICRO-based embedded systems. What counts as a "micro" is growing over time, and some people want to use it on more powerful processors, but at its core it aims to be a low-resource-consumption system for systems with limited resources.

What ISR is taking a millisecond to complete? That sounds like you are misusing your ISRs. Your heap blocking ISRs just adds jitter to the ISRs, which normally should be kept very small.

No, xSemaphoreTakeFromISR() just reports that the semaphore is already taken and that the ISR can't have the resource (if that is what the semaphore is guarding). I will admit it is a function I rarely use, but then, to me, ISRs shouldn't be managing resources; they should handle on-demand notifications and simple data transfers, with the larger operations deferred to high-priority tasks.

My guess is that your system doesn't really have significant "Real Time" requirements, since you talk about delaying a scheduler operation as if it isn't really critical. For a real-time system, when a request comes in (normally via an ISR), the time to activate the responsible task is very critical.

Note: while most of the systems I work on have "tick" rates like yours, in the 100 Hz range, the expected response of a priority task to being activated is in the low (or sub-) microseconds: interrupting whatever lower-priority operation is happening and immediately switching to it.

Everything you have described and demonstrated doesn't seem to support that sort of timing. If your heap blocks interrupts for even a moderate fraction of a millisecond, it causes the system to miss its response times.


vSemaphoreTakeBlocking() is it.

No, my ISR triggers a task and returns. It takes microseconds, well below 1 ms.

Sorry, I didn't know. Thank you!

I’m not using it at all.

I didn't consider this when writing my kernel. If it really were that critical, I wouldn't trigger a task; I would handle the interrupt in the ISR. Triggering a task and using the heap take time; it's not about microseconds here.

You are still wrong, sorry. Please don't get me wrong: most of your statements are absolutely right!

Thank you anyway!

Like RAc, I've been studying Intel's white papers, and I was working with NASM 20 years ago. Now it is ARMv8 on GCC, a brilliant system. I think I do know what is going on under the hood. You underestimate me.
My idea of pyramidal directories is about 20 years old, too. I have implemented it in C++ and designed a solid heap where fragmentation is irrelevant.

Hoping you get me soon. :wink:

So, why aren't you using a simple mutex on the heap?

Then your system and design isn't a REAL-TIME operating system. The whole concept of "Real-Time" is about deadlines, which is one reason dynamic memory is normally not used during the operation of a real-time system. My design criterion is that nothing real-time can depend on the acquisition of dynamic memory. Most of my memory usage is configured at system start-up. Non-real-time segments might do minimal dynamic memory allocation, but only in sub-systems that can "fail" and report a task not done.

Who says I am not? The sample heap implementations are not fixed and limited. The one issue with a mutex is that you need to solve a chicken-and-egg problem, so it helps to have static allocation enabled (which isn't the default, for historical reasons). I will note that my normal heap usage doesn't use any of the built-in FreeRTOS memory systems.
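The chicken-and-egg problem mentioned here (needing the heap to create the lock that protects the heap) is what static allocation solves. A sketch, assuming configSUPPORT_STATIC_ALLOCATION is set to 1 in FreeRTOSConfig.h (the wrapper names are illustrative):

```c
/* A statically allocated mutex needs no heap to create, so it can
 * safely guard the allocator itself. */
#include "FreeRTOS.h"
#include "semphr.h"

static StaticSemaphore_t xMutexStorage;  /* lives in .bss, no malloc needed */
static SemaphoreHandle_t xHeapMutex;

void vHeapLockInit( void )
{
    xHeapMutex = xSemaphoreCreateMutexStatic( &xMutexStorage );
}

void *pvProtectedAlloc( size_t xSize )
{
    void *pvResult = NULL;

    if( xSemaphoreTake( xHeapMutex, portMAX_DELAY ) == pdTRUE )
    {
        pvResult = pvPortMalloc( xSize );  /* any allocator would do here */
        xSemaphoreGive( xHeapMutex );
    }
    return pvResult;
}
```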

My best idea now is to just block the scheduler with a critical mutex.

Thank you, and happy weekend!

There is no such thing as a critical mutex.


Since all your tasks are of the same priority, all you need to do is configure the kernel with the round-robin scheduling option disabled. Then a task is only switched away from if it blocks or a higher-priority task becomes ready, which won't happen since they are all the same priority.
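In configuration terms this is a FreeRTOSConfig.h fragment (a sketch; your project's other settings are unchanged):

```c
/* With time slicing disabled, equal-priority tasks are not round-robinned
 * on the tick interrupt: a task runs until it blocks, yields, or a
 * higher-priority task becomes ready. */
#define configUSE_PREEMPTION     1
#define configUSE_TIME_SLICING   0
```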
