I am not sure I completely follow. It seems that the word mutex has been used to describe multiple things, so let's disambiguate to ensure that we are talking about the same things:
Mutex - A FreeRTOS object which is created using the xSemaphoreCreateMutex API.
Spin lock - Used by the FreeRTOS implementation for cross-core synchronization in short critical sections.
If you want to access a resource from more than one task, you would use a FreeRTOS mutex (which you create using the xSemaphoreCreateMutex API) so that a task which cannot get the mutex blocks instead of burning CPU cycles. The granular locking proposal ensures that two tasks trying to grab two different FreeRTOS mutexes do not compete for the same spin lock.
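For illustration, here is a minimal sketch of that pattern (the task and variable names are made up for the example):

```c
#include "FreeRTOS.h"
#include "semphr.h"
#include "task.h"

static SemaphoreHandle_t xResourceMutex;
static uint32_t ulSharedCounter;

static void prvWorkerTask( void *pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        /* The task blocks here (it does not spin) until the mutex is free. */
        if( xSemaphoreTake( xResourceMutex, portMAX_DELAY ) == pdTRUE )
        {
            ulSharedCounter++;                 /* Access the shared resource. */
            xSemaphoreGive( xResourceMutex );
        }
    }
}

void vStartWorkers( void )
{
    xResourceMutex = xSemaphoreCreateMutex();
    configASSERT( xResourceMutex != NULL );

    xTaskCreate( prvWorkerTask, "Worker1", configMINIMAL_STACK_SIZE, NULL, 2, NULL );
    xTaskCreate( prvWorkerTask, "Worker2", configMINIMAL_STACK_SIZE, NULL, 2, NULL );
}
```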
Are you saying that you will use spin locks directly in the application? If yes, why do you want to do that?
As @richard-damon explained already, you are looking at the single-core code path and a single-core port. You need to set configNUMBER_OF_CORES to the number of cores available on your platform and use an SMP port.
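As a rough sketch, the SMP-related settings in FreeRTOSConfig.h would look something like this (the value 2 is only an example, and your port may require additional options - check its documentation):

```c
/* FreeRTOSConfig.h - SMP-related settings (sketch only, values are examples). */
#define configNUMBER_OF_CORES            2   /* Number of cores on the target. */
#define configRUN_MULTIPLE_PRIORITIES    1   /* Allow tasks of different priorities to run at the same time. */
#define configUSE_CORE_AFFINITY          1   /* Optional: allow pinning tasks to specific cores. */
```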
I could not find any example where portENTER_CRITICAL calls vTaskEnterCritical besides the template. I looked at all the GCC/ARM ports and none of them do this.
Even so, if vTaskEnterCritical is called, it still maps onto only two spin locks through portGET_TASK_LOCK/portGET_ISR_LOCK, so we have the same problem of user and kernel code sharing two spin locks, even though that makes no sense.
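For reference, the chain being described looks roughly like this in an SMP build (a paraphrased sketch, not copied from any particular port):

```c
/* Paraphrased sketch of the SMP critical-section chain - not an exact copy of
 * any port. With configNUMBER_OF_CORES > 1 the port is expected to map
 * portENTER_CRITICAL() onto the kernel: */
#define portENTER_CRITICAL()    vTaskEnterCritical()

/* vTaskEnterCritical() then acquires the kernel's two spin locks through the
 * port-supplied macros portGET_TASK_LOCK() and portGET_ISR_LOCK(), so every
 * critical section in the system contends on that same pair of locks. */
```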
As far as I know, there aren't many processors with fully released SMP ports yet. I suspect that will improve over time. Did any of the ports you looked at check for SMP mode? (I suspect not.)
Yes, sort of by definition, the built-in critical section is the one used by FreeRTOS for its kernel. Most application sharing can be handled by a mutex unless it needs protection from an ISR too. If you don't want to share the lock with the kernel, don't use its lock; make your own.
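If you do go the "make your own" route, a minimal sketch with C11 atomics could look like the following (this assumes your toolchain supports <stdatomic.h> and maps it to the target's exclusive-access instructions; the lock is owned entirely by the application and never touches the kernel's locks):

```c
#include <stdatomic.h>

/* Application-owned spin lock, separate from the kernel's internal locks.
 * This is only a sketch; a real design must also consider whether the
 * protected section can be preempted or interrupted while the lock is held. */
typedef struct
{
    atomic_flag xFlag;
} AppSpinLock_t;

#define APP_SPIN_LOCK_INIT    { ATOMIC_FLAG_INIT }

static inline void vAppSpinLockAcquire( AppSpinLock_t * pxLock )
{
    /* Busy-wait until the flag is clear, then set it atomically. */
    while( atomic_flag_test_and_set_explicit( &pxLock->xFlag, memory_order_acquire ) )
    {
        /* Spin. A real implementation might add an architecture-specific
         * pause/yield hint here. */
    }
}

static inline void vAppSpinLockRelease( AppSpinLock_t * pxLock )
{
    atomic_flag_clear_explicit( &pxLock->xFlag, memory_order_release );
}
```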
For some reason, most silicon vendors (NXP, AMD, etc.) choose to release their FreeRTOS ports (some of which support SMP) directly to their customers instead of publicly contributing them to the FreeRTOS repositories. Perhaps someone from the AWS team (@aggarg?) can comment further on why this is the case and what their preference is in this situation?
I suspect that the vendors don't want to deal with upstreaming their changes and supporting customers publicly, but as far as I'm aware they also can't prevent a third party from contributing the vendor ports, since this is permitted under the MIT license.
In the NXP CM7 and R55 ports, portENTER_CRITICAL() does indeed call vTaskEnterCritical().
FreeRTOS is MIT licensed and we do not prevent anyone from distributing it. If you got a FreeRTOS port from your vendor, you can reach out to them for support.
I'd still suggest checking with the vendor before doing that.
The problem is that this interface isn’t IN FreeRTOS itself, but in the port layer. A given port needs to supply the two locks, but not some generic method to create other locks, in part because it may not be possible on a given set of hardware to generate arbitrary versions of the locks.
The SMP version is fairly new, and largely supported through third-party ports. Perhaps as the technology matures, minimum expected capabilities will emerge and a generic API will be created. For now, you need to look at how the given port works and use whatever API it provides.
Any SMP OS should provide an API to work with spin locks: GetSpinLock(), ReleaseSpinLock(), TryGetSpinLock()… something like the sketch below.
From the hardware point of view, there is no SMP on those cores if there is no hardware support for spin locks.
In the case of ARM we're talking about Load/Store Exclusive (LDREX/STREX); on other architectures, specific peripherals implement the exclusive lock.
Thus, if the port doesn't provide the spin lock hardware feature at all, then there is no SMP on that port.
If there is a limited number of spin locks, fine: the API that "creates" the spin locks can return an error when the limit is reached.
On the ARM architecture, if LDREX/STREX are supported, the number of spin locks is limited only by the amount of memory available.
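To make that concrete, here is a hypothetical sketch of such an API using C11 atomics. The names CreateSpinLock(), GetSpinLock(), TryGetSpinLock() and ReleaseSpinLock() are made up (nothing like this exists in FreeRTOS today), the pool size is arbitrary, and the pool allocation itself is left unprotected for brevity:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

#define MAX_SPIN_LOCKS    8

typedef struct
{
    atomic_flag xFlag;
    bool        bInUse;
} SpinLock_t;

static SpinLock_t xSpinLockPool[ MAX_SPIN_LOCKS ];

/* Hand out a spin lock from a fixed-size pool; report an error (NULL) when
 * the limit is reached. Note: this allocation step itself is not thread-safe
 * in this sketch. */
SpinLock_t * CreateSpinLock( void )
{
    for( size_t i = 0; i < MAX_SPIN_LOCKS; i++ )
    {
        if( !xSpinLockPool[ i ].bInUse )
        {
            xSpinLockPool[ i ].bInUse = true;
            atomic_flag_clear( &xSpinLockPool[ i ].xFlag );
            return &xSpinLockPool[ i ];
        }
    }

    return NULL;    /* Limit reached - the caller sees an error. */
}

bool TryGetSpinLock( SpinLock_t * pxLock )
{
    /* Returns true if the lock was acquired. */
    return !atomic_flag_test_and_set_explicit( &pxLock->xFlag, memory_order_acquire );
}

void GetSpinLock( SpinLock_t * pxLock )
{
    while( !TryGetSpinLock( pxLock ) )
    {
        /* Spin - on ARM the atomic builds on exclusive load/store sequences. */
    }
}

void ReleaseSpinLock( SpinLock_t * pxLock )
{
    atomic_flag_clear_explicit( &pxLock->xFlag, memory_order_release );
}
```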
And my point is that it becomes the port's job to provide the API, not the kernel's. Remember, we are on the first version of FreeRTOS that has SMP capabilities on the trunk, and thus its SMP API is still evolving.
Yes, ideally a generic API will develop to allow portable use of spin locks, but until there is some experience of where their boundaries should be, it is hard to design. Once an API is created, it will simply exclude processors that can't support that particular API. Since one of the goals of FreeRTOS is wide processor support, it makes sense to wait a bit before locking down that API.
Perhaps the way to look at it is that FreeRTOS is not an "SMP OS" but a "Real Time OS" that supports some SMP features. It should be noted that adding SMP to an OS tends to increase timing uncertainty, and thus impacts the real-time nature of the OS. Spin-waiting, which is what spin locks naturally do, goes against the basic nature of real-time design: you want to be able to bound the time of such spins.
Why do you want to use spin locks in your application? You said earlier that you want to use them to share resources among tasks running on different cores, and as I replied before, you should use a mutex for that. Would you please share why you want to use spin locks?
Not sure which mutex you recommended before; does it work on SMP?
My understanding is that a spin lock can be used to implement a mutex that works on SMP. Do we have this in FreeRTOS?
In other words, the application DOES NOT need to use spin locks to protect a resource shared among multiple tasks. The application should use a FreeRTOS mutex. The FreeRTOS mutex implementation internally uses spin locks to ensure cross-core synchronization whenever needed.
Here is the call path for the Pi Pico SMP port.
Same problem: we end up on the same two spin locks again.
Even if xSemaphore can take specific parameters, from taskENTER_CRITICAL and below there is no parameter to differentiate them, and they end up on the same two spin locks.
Which problem are you talking about? Are you facing a functional issue, i.e. is the mutex not behaving as expected? In other words, are tasks running on different cores able to enter the code section protected by the mutex simultaneously? Or are you trying to point out lock contention? If the latter, this PR is addressing it - Feature/smp granular locks v4 by sudeep-mohanty · Pull Request #1154 · FreeRTOS/FreeRTOS-Kernel · GitHub.