I’ve studied your semaphore in the documentation. You have implemented a counter to provide simultaneous reads. Two reading threads may block a thread trying to write. To make it safe, you need to prevent reading as soon as one thread is trying to write. You can do this with a linked list of all reading tasks.
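For illustration, a minimal sketch of that counting approach (invented names, not the actual FreeRTOS code): as long as readers keep overlapping, the count never reaches zero and a writer waiting on the gate never gets in.

```c
#include "FreeRTOS.h"
#include "semphr.h"

/* Illustrative counting read-lock, not the actual FreeRTOS implementation. */
static SemaphoreHandle_t count_mutex;   /* protects reader_count                    */
static SemaphoreHandle_t writer_gate;   /* held for as long as any reader is inside */
static unsigned reader_count;

void rwlock_init(void)
{
    count_mutex = xSemaphoreCreateMutex();
    writer_gate = xSemaphoreCreateBinary();  /* binary, not a mutex: the last reader,
                                                a different task, gives it back      */
    xSemaphoreGive(writer_gate);             /* initially free                       */
}

void read_lock(void)
{
    xSemaphoreTake(count_mutex, portMAX_DELAY);
    if (++reader_count == 1)
        xSemaphoreTake(writer_gate, portMAX_DELAY); /* first reader locks writers out */
    xSemaphoreGive(count_mutex);
}

void read_unlock(void)
{
    xSemaphoreTake(count_mutex, portMAX_DELAY);
    if (--reader_count == 0)
        xSemaphoreGive(writer_gate);   /* only the LAST reader admits a writer;
                                          overlapping readers keep the count above
                                          zero indefinitely, starving the writer   */
    xSemaphoreGive(count_mutex);
}

void write_lock(void)   { xSemaphoreTake(writer_gate, portMAX_DELAY); }
void write_unlock(void) { xSemaphoreGive(writer_gate); }
```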
Is there anything you want to ask or share with this forum?
To protect a resource in a multi-tasking environment, you should use task_handle_t as the owner. For a read-lock, you can use a linked list. If two or more tasks are reading in loops, the counter may never go to zero, blocking a task trying to write.
You can find a simple mutex implementation at GitHub - svenbieg/Scheduler: C++ Task-manager
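As a sketch of the record suggested above (using FreeRTOS’s TaskHandle_t in place of task_handle_t; purely illustrative, not code from the Scheduler repository):

```c
#include "FreeRTOS.h"
#include "task.h"

/* One exclusive owner, plus a linked list of all tasks currently reading,
 * so the lock knows exactly which tasks still hold shared access. */
typedef struct reader_entry {
    TaskHandle_t          task;
    struct reader_entry  *next;
} reader_entry_t;

typedef struct {
    TaskHandle_t     owner;      /* task holding the write lock, or NULL */
    reader_entry_t  *readers;    /* tasks currently holding a read lock  */
} shared_lock_t;
```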
It looks as if you need to implement a reader-writer lock. It is trivially true that the synchronization primitives (critical sections, semaphores and mutexes) are unaware of application usage and thus are “symmetrical” between claimants.
This has been discussed many times before, and several solutions for implementing compound synchronization primitives such as r/w locks have been proposed; you may want to search the forum for “reader/writer lock”.
Actually, blocking all new readers just because a writer is requesting access isn’t necessarily the correct answer, as that could result in a priority inversion if the writer is low priority. A better solution (though harder to implement on top of the current FreeRTOS primitives) is to only block readers at or below the priority of the writing task from getting a read lock.
You also probably want priority inheritance when a writer task gets blocked: any reader task running at a lower priority than the writer should be boosted. This goes beyond the ability of the FreeRTOS primitives, as only the mutex has priority inheritance, and it can only have a single holder.
I have sketched out a possible way to do all of this, but haven’t had the need to fully implement it in FreeRTOS; it appears the hooks are available to do all of this with a new type of blocking primitive.
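This is not that sketch, just a hypothetical illustration of the admission rule described above: assuming the lock tracks the highest priority among the waiting writers, a new reader is admitted only if it runs strictly above that priority.

```c
#include "FreeRTOS.h"
#include "task.h"

/* Hypothetical lock state (not existing FreeRTOS API). */
typedef struct {
    BaseType_t  writer_waiting;          /* pdTRUE if any writer is queued   */
    UBaseType_t highest_writer_priority; /* valid only when writer_waiting   */
} rw_admission_t;

/* A new reader may enter unless a writer of equal or higher priority is
 * already waiting. Readers above the writer's priority still get in, so a
 * low-priority writer cannot invert them. */
static BaseType_t reader_may_enter(const rw_admission_t *s)
{
    if (s->writer_waiting == pdFALSE)
        return pdTRUE;
    return (uxTaskPriorityGet(NULL) > s->highest_writer_priority) ? pdTRUE : pdFALSE;
}
```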
It is really tricky, because I have to lock multiple resources at once. I first lock the console to get uninterrupted strings. Holding this lock, I have to lock the heap. I need separate linked lists for both mutexes just to get text output. Priority inversion isn’t an issue here. Once a writing task owns the mutex, high-priority readers should get prepended in the waiting list. I’m going to do some reviews over the holidays.
For everybody who’s interested, my first UART driver is working and handles a few commands.
Hopefully my publication is helpful for FreeRTOS.
Oh, I see what you are getting at. You basically want a finer-granularity r/w lock in which there are not only two levels of access priority (i.e. readers and writers) but multiple levels. That could be accomplished by an extension of a reader/writer lock.
Your approach is to couple the “application-level priorities” with task priorities by modifying mutex semantics such that the mutex claim function does not maintain a FIFO but a task-priority-sorted list. I would not do that, because in the generic case that may lead to unexpected and hard-to-detect starvation issues.
By using an r/w extension, application writers are forced to pass an application-level priority (e.g. writer / reader-hi-pri / reader-lo-pri) to the claim function and thus make the control flow explicit. Note that it would be trivially possible to map a task priority to such a limited application-level priority range, but be aware that priority inheritance may again become an issue (i.e. if a task has already inherited a higher run-time priority when asking for the extended r/w lock, the application-level priority may be different than expected).
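As a sketch of what such an extended claim interface could look like (all names invented; nothing here is existing FreeRTOS API):

```c
#include "FreeRTOS.h"

/* The caller states its application-level role explicitly instead of
 * relying on its task priority. */
typedef enum {
    RW_CLAIM_READER_LOW,   /* ordinary reader                        */
    RW_CLAIM_READER_HIGH,  /* reader that may overtake low readers   */
    RW_CLAIM_WRITER        /* exclusive access                       */
} rw_claim_level_t;

typedef struct rw_xlock rw_xlock_t;    /* opaque; implementation not shown */

BaseType_t rw_xlock_claim(rw_xlock_t *lock,
                          rw_claim_level_t level,
                          TickType_t timeout);
void       rw_xlock_release(rw_xlock_t *lock);
```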
As a side note, I do not see what semaphores have to do with all of that.
I have two priorities for application development, high and normal. High priority is for service tasks only, handling interrupt signals. This makes implementation difficult enough. My only example is my serial device. An interrupt is triggered, the ISR copies the data into a ring buffer and triggers a signal. The service task is woken up as soon as possible, allocates heap and empties the ring buffer. This high-priority service task does nothing else and is kept running by the scheduler. Your application may block the heap, so the priority is raised with a critical mutex. Prepending the service task in the wait list and preventing a task switch on the heap is all I can do.
The semaphore is there to make a read-lock possible, I guess. Counting reading tasks is not enough.
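The ISR/service-task pattern described above maps onto standard FreeRTOS task notifications; here is a rough sketch in which uart_read_byte(), ring_put(), ring_get() and process_byte() stand in for the actual driver code:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Placeholders for the driver; only the notification calls are FreeRTOS API. */
extern int  uart_read_byte(void);
extern void ring_put(int byte);
extern int  ring_get(int *byte);        /* returns 0 when the ring buffer is empty */
extern void process_byte(int byte);

static TaskHandle_t service_task_handle;

void UART_IRQHandler(void)              /* the ISR: copy and signal, nothing more */
{
    BaseType_t woken = pdFALSE;
    ring_put(uart_read_byte());         /* copy data into the ring buffer */
    vTaskNotifyGiveFromISR(service_task_handle, &woken);
    portYIELD_FROM_ISR(woken);          /* run the service task as soon as possible */
}

void service_task(void *arg)            /* the high-priority service task */
{
    (void)arg;
    int byte;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY);  /* sleep until the ISR signals */
        while (ring_get(&byte))                   /* empty the ring buffer       */
            process_byte(byte);                   /* e.g. allocate and buffer it */
    }
}
```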
I believe your use case can be implemented with a few extensions to an r/w lock. Your service task acts as the writer, and all other tasks requesting the heap act as readers, but they serialize access to the heap with a “local” mutex.
Now of course your concern is that multiple readers may have lined up in the waiting list for the heap mutex when your service task requests write access. The “normal” r/w lock would now prevent new readers from entering the lock but would work off all readers before passing on the lock to the writer.
That issue can be solved by immediately returning from the “local reader mutex” when a pending write has been detected. Note that this information must be available in the r/w lock implementation anyway, because (as noted above) it must lock out new readers once a writer has claimed access, so all the extension would do is not only keep out new readers but also flush all old readers.
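A sketch of that extension, under the assumption that the lock exposes its pending-write state; a real implementation would wake the queued readers directly (e.g. via an event), the polling loop here is only the simplest approximation:

```c
#include "FreeRTOS.h"

/* Hypothetical r/w lock helpers, not existing API. */
typedef struct rw_lock rw_lock_t;
extern BaseType_t rw_write_pending(const rw_lock_t *l);          /* writer queued?        */
extern BaseType_t rw_read_try_lock(rw_lock_t *l, TickType_t t);  /* shared acquire, bounded */

BaseType_t rw_read_lock_flushable(rw_lock_t *l, TickType_t poll)
{
    for (;;) {
        if (rw_write_pending(l))                 /* a writer announced itself   */
            return pdFALSE;                      /* back off immediately        */
        if (rw_read_try_lock(l, poll) == pdTRUE) /* shared acquire, bounded wait */
            return pdTRUE;
    }
}
```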
Well, that’s wrong from the start. The only read functions on the heap are heap_available() and heap_info(). All tasks allocating and deleting need write locks and block service tasks.
Also, you cannot give a mutex once it has been taken. This is not even necessary, because your heap has constant, short execution times and can provide a few allocations/deallocations before a sufficiently large ring buffer overruns.
I’m not sure if we are in sync with our understanding of your use case.
The r/w solution I sketched will ensure that:
a) All tasks will be serialized with respect to the resource to be protected (in your case your heap manager)
b) the service task will have the highest priority in that it has to wait for at most the task holding the lock, not other tasks that may have requested access to the resource in the meantime.
Isn’t that what your use case asks for?
That’s right. Your application also has the highest priority for the time it spends on the heap. This really seems to have been a problem until I came up with my two serial pyramidal directories. You have guaranteed short execution times now. It’s a present of mine: two seconds of calculation 20 years ago, three weeks of implementation last year. It is the way it is, malloc() takes a size and returns a position.
You can easily test your ring buffer by wasting your heap. Just allocating millions of integers, deleting every second one and then allocating long ints should do the trick.
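A rough version of that stress test might look like this (sizes scaled down; adjust SLOTS to the heap under test and repeat the call to get into the millions):

```c
#include <stdlib.h>

#define SLOTS 4096   /* scale to the heap under test */

/* Allocate many ints, free every second one, then request larger blocks so the
 * allocator has to deal with the gaps. Uses whatever malloc()/free() is under test. */
static void heap_stress(void)
{
    static void *blocks[SLOTS];
    size_t i;

    for (i = 0; i < SLOTS; i++)
        blocks[i] = malloc(sizeof(int));
    for (i = 0; i < SLOTS; i += 2) {             /* delete every second block      */
        free(blocks[i]);
        blocks[i] = NULL;
    }
    for (i = 0; i < SLOTS; i += 2)               /* larger "long int" blocks       */
        blocks[i] = malloc(sizeof(long long));
    for (i = 0; i < SLOTS; i++)                  /* clean up                       */
        free(blocks[i]);
}
```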
FreeRTOS provides OS primitives which you can use to build additional mechanisms on top of. As suggested by several others on this thread, what you’re describing sounds like a flavor of a reader-writer lock with a prioritized queueing mechanism (which can be a linked list like you mentioned).
The basic FreeRTOS semaphore will not meet your use case out of the box, but it can be combined with other code to provide the behavior you seek. Creating another primitive (maybe a ReaderWriterPriorityLock) on top of FreeRTOS is your best bet. If you need recommendations on how to do this, I’m happy to help.
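One possible starting point for such a ReaderWriterPriorityLock, sketched here with the classic turnstile construction on top of plain FreeRTOS semaphores (illustrative only; it does not yet add the priority-aware queueing or inheritance discussed earlier):

```c
#include "FreeRTOS.h"
#include "semphr.h"

typedef struct {
    SemaphoreHandle_t count_mutex;  /* protects readers                        */
    SemaphoreHandle_t turnstile;    /* writers park here; new readers stall    */
    SemaphoreHandle_t room_empty;   /* free when no reader or writer is inside */
    unsigned          readers;
} rw_prio_lock_t;

void rw_prio_init(rw_prio_lock_t *l)
{
    l->count_mutex = xSemaphoreCreateMutex();
    l->turnstile   = xSemaphoreCreateMutex();
    l->room_empty  = xSemaphoreCreateBinary(); /* may be given by a task other
                                                  than the taker, so not a mutex */
    xSemaphoreGive(l->room_empty);
    l->readers = 0;
}

void rw_prio_read_lock(rw_prio_lock_t *l)
{
    xSemaphoreTake(l->turnstile, portMAX_DELAY);   /* blocks if a writer is queued */
    xSemaphoreGive(l->turnstile);
    xSemaphoreTake(l->count_mutex, portMAX_DELAY);
    if (++l->readers == 1)
        xSemaphoreTake(l->room_empty, portMAX_DELAY);
    xSemaphoreGive(l->count_mutex);
}

void rw_prio_read_unlock(rw_prio_lock_t *l)
{
    xSemaphoreTake(l->count_mutex, portMAX_DELAY);
    if (--l->readers == 0)
        xSemaphoreGive(l->room_empty);             /* last reader lets the writer in */
    xSemaphoreGive(l->count_mutex);
}

void rw_prio_write_lock(rw_prio_lock_t *l)
{
    xSemaphoreTake(l->turnstile, portMAX_DELAY);   /* lock out new readers          */
    xSemaphoreTake(l->room_empty, portMAX_DELAY);  /* drain readers already inside  */
}

void rw_prio_write_unlock(rw_prio_lock_t *l)
{
    xSemaphoreGive(l->room_empty);
    xSemaphoreGive(l->turnstile);
}
```

Once a writer parks at the turnstile, new readers stall behind it, so the writer only waits for the readers already inside; the priority handling would have to be layered on top of this.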
In other words, you think it is OK for the high-priority readers to be blocked by a mid-priority task that uses up the CPU so that the writer task doesn’t get to run.
Trying to say you don’t allow this to happen is just putting blinders on.
The standard workaround for this is that when the higher-priority reader gets put on the list, you temporarily raise the priority of the writer, just as you want to raise the priority of a low-priority reader if a high-priority writer tries to take the lock.
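For example, assuming the lock records the writer’s task handle, the boost could be as simple as this sketch (the original priority must be restored when the writer releases the lock):

```c
#include "FreeRTOS.h"
#include "task.h"

/* Called by a reader that is about to block; writer_task is assumed to be the
 * task currently holding (or next in line for) the write lock. */
static void boost_writer_if_needed(TaskHandle_t writer_task)
{
    UBaseType_t reader_prio = uxTaskPriorityGet(NULL);         /* the blocked reader */
    UBaseType_t writer_prio = uxTaskPriorityGet(writer_task);

    if (reader_prio > writer_prio)
        vTaskPrioritySet(writer_task, reader_prio);  /* temporary boost only */
}
```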
That’s wrong from the start, too.
I have one writing owner and a linked list of waiters. The linked list has two dimensions: reading/sharing tasks are appended in parallel, while writing tasks are appended in series. Priority tasks should be prepended, if I did everything right. I’m going to do some reviews, please excuse my mistakes! I even know there is something wrong. I’m limited to 30 days of holiday a year, working in the electrical industry. This is why I am trying to keep everything as simple as possible. Only two priorities, nothing users have to deal with. Also, I want to be able to read my code next year. My linked list was simple; I split it yesterday because it was wrong. This is work in progress to get my Clusters into flash memory: an example with an improvement on the heap to get reliable C++ for flash memory, without any semaphore, by the way.
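For readers following along, the two-dimensional wait list described above might be structured roughly like this (an illustrative guess, not the actual implementation):

```c
#include "FreeRTOS.h"
#include "task.h"

/* Writers queue one after another (in series); each writer slot carries the
 * group of readers that may run together (in parallel) in that slot. */
typedef struct reader_node {
    TaskHandle_t         task;
    struct reader_node  *next;        /* readers of the same group, in parallel */
} reader_node_t;

typedef struct writer_node {
    TaskHandle_t         task;
    reader_node_t       *shared;      /* readers admitted alongside this slot   */
    struct writer_node  *next;        /* writers, in series                     */
} writer_node_t;

typedef struct {
    TaskHandle_t    owner;            /* current exclusive owner, or NULL       */
    writer_node_t  *wait_list;        /* head; priority tasks get prepended     */
} shared_mutex_t;
```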
In other words, you aren’t working on a general problem but on something very application-specific, and you’re just misusing terminology.
I don’t get you, sir; why misusing? All I want is a mutex for reading and writing a shared resource. Maybe you have an example where a semaphore is useful? Like I said, I found that counting is not enough. I need two linked lists for this; how can you make it without this information? SemaphoreTake increases the counter, SemaphoreGive decreases it. But I need to switch to the waiting task on release, if there is one. I think the semaphore is the right direction to implement a read-lock for real-time operation. I don’t expect anybody to work on this for free; I know it took me weeks to write my scheduler, and it is still not working. You seem to get me wrong every time; I’m trying to help. I’m sharing what I’ve found with my friends who have the same problem. It is just impossible to get a reliable read-lock with FreeRTOS at the moment. At the very least I needed mutex_lock() and mutex_lock_shared() for my wrapper classes. I didn’t make it on Windows, didn’t make it on Linux and didn’t make it on FreeRTOS. It’s not to blame anybody; it takes a very long time to learn C++, and weeks to implement a read-lock.
You see, I find my illustration helpful. Maybe I have locked waiting tasks, which is wrong. My critical mutex won’t work, but I already know how to fix it.
