richarddamon wrote on Thursday, April 11, 2019:
Perhaps one thing that muddies the picture here is that I am not sure you are keeping some of the assumptions that at least I see as part of the fundamental model in using FreeRTOS. One key aspect is address space. FreeRTOS has an implied, very simple address mapping: logical address space in a task = physical memory address space (i.e. there is no MMU remapping of addresses). Any task can give any other task (or ISR) an address, and it can access that location without any overhead of re-mapping the address to its own address space. There are some restricted tasks (that are sort of second-class citizens) that might not be able to do some accesses to that location, but its address is always the same for every task that wants to use it, and no special function needs to be called to get such a chunk of memory. One thing this implies is that things like Thread Local Storage (or Core Local Storage) aren’t done by address remapping; any routine that wants to access TLS can’t just assume some fixed address whose memory mapping changes per task, but needs to access a pointer in the Task Control Block. This makes TLS access a bit more awkward, but means that task switching can be fast and efficient. One implication of this is that if multiple tasks share the same base code, they share ‘globals’, and if they need private copies of data, they need to refer to the TCB to get the address of their private data, or keep a pointer on the stack.
Part of this assumption is rooted in the fact that FreeRTOS runs on many processors, many of which don’t have an MMU, so it can’t assume the ability to remap addresses to provide TLS at a common address. The optional restricted tasks only need an MPU, not an MMU, so it would be possible to create an MPU port for a processor without a real MPU that works just the same, except that the restricted tasks aren’t really restricted.
If you keep that rule for a multicore version, then all the cores run off the same copy of code and all use the same address space, so to get to the core-specific data structures, they need to get their core ID and select the right block for it. This means it doesn’t matter whether there is a shared cache or distinct caches across the cores, as address X is address X. Perhaps it makes sense that a multi-core variant would assume an MMU and use it to provide some limited Core Local Storage to simplify some operations, but that starts to depart from some of the core principles behind FreeRTOS, though some of that happens anyway in a multicore system.
I think here the distinction between SMP and AMP (with BMP somewhat in the middle) is that the AMP model puts a FreeRTOS ‘system’ on a single core, and the primary interactions are between other tasks on that core, using resources that are basically dedicated to that core. There may be stuff going on in other cores, but that is ‘Them’ and not ‘Us’, and communication to them is done differently, perhaps even through a different API, or at the least the code behaves somewhat differently when doing so. That is the Asymmetric part.
In SMP, the whole system works together and you don’t really care what core a given task might be on; at the task level you just talk to it through the basic services and things happen. There isn’t an Us vs Them on cores, it is all just We. Tasks might be locked to specific cores for efficiency or for scheduling reasons, but the basics of inter-task communication don’t assume that. This is ‘Symmetric’.
In between is Bound Multiprocessing, where tasks don’t move core to core, and maybe you bind a group of them together on a given core to have more efficient inter-task communication. The OS still deals with things multi-core, with perhaps a bit of a mix of AMP and SMP techniques within it.