static and volatile declaration of xSemaphoreCreateMutex()

michaeln32 wrote on Sunday, June 03, 2018:

Hi

static volatile SemaphoreHandle_t SEMA = NULL;

SEMA = xSemaphoreCreateMutex();

Can SEMA be declared static?

Can SEMA be declared volatile?

Thank you

Michael

hs2sf wrote on Sunday, June 03, 2018:

Sure, the handle can be static. It’s just a (pointer) variable.
Alternatively you could use xSemaphoreCreateMutexStatic and define the mutex itself with static or otherwise caller-provided storage.
I don’t understand your intention in declaring a handle volatile. It’s usually not needed.
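A minimal sketch of the statically allocated alternative mentioned above, assuming configSUPPORT_STATIC_ALLOCATION is set to 1 in FreeRTOSConfig.h (the function name vInitMutex is made up for the example):

#include "FreeRTOS.h"
#include "semphr.h"

static StaticSemaphore_t xMutexBuffer;   /* storage for the mutex control block */
static SemaphoreHandle_t xMutex = NULL;  /* handle, file-local thanks to 'static' */

void vInitMutex( void )
{
    /* No heap allocation: the mutex lives in xMutexBuffer. */
    xMutex = xSemaphoreCreateMutexStatic( &xMutexBuffer );
}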

michaeln32 wrote on Sunday, June 03, 2018:

Thank you HS2.

I want to define SEMA static because I want to use this semaphore in only one file (the file in which it is defined).

I want to define SEMA volatile because I do not want the compiler to optimize accesses to that semaphore.

Can you please give your opinion on the things above?

thank you

richard_damon wrote on Sunday, June 03, 2018:

The volatile protects against optimization of accesses to the SEMA variable, which likely should only change once, when the semaphore is created. If the semaphore is created after the scheduler is started, and a task other than the one that creates it looks at the handle to see whether the semaphore has been created yet, then you want the volatile.

I personally try to create all my semaphores (and similar objects) before I start the scheduler, or at least before any task that wants to use them, to avoid the need to check and wait/skip on their existence. At that point I don’t need the volatile, since the handle never changes from the point of view of any code that can’t see the creation directly.
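A minimal sketch of that ordering, assuming a hypothetical task function vTaskThatUsesSema: the mutex is created in main() before vTaskStartScheduler(), so no task can ever observe an uncreated handle and the handle does not need to be volatile.

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t SEMA = NULL;

static void vTaskThatUsesSema( void *pvParameters )
{
    ( void ) pvParameters;
    for( ;; )
    {
        if( xSemaphoreTake( SEMA, portMAX_DELAY ) == pdTRUE )
        {
            /* ... access the shared resource ... */
            xSemaphoreGive( SEMA );
        }
    }
}

int main( void )
{
    SEMA = xSemaphoreCreateMutex();   /* created before any task runs */

    xTaskCreate( vTaskThatUsesSema, "user", configMINIMAL_STACK_SIZE,
                 NULL, tskIDLE_PRIORITY + 1, NULL );

    vTaskStartScheduler();            /* tasks only ever see a valid handle */
    for( ;; );                        /* should never get here */
}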

hs2sf wrote on Sunday, June 03, 2018:

Declaring a variable volatile tells the compiler that the variable might be changed from outside the current flow of execution as visible to the compiler.
This is especially needed for memory-mapped hardware (e.g. registers), which might be changed by external hardware without any notice to the software. Hence the compiler must generate code to do a full ‘memory’ i.e. hardware access (to the register) and must not optimize that access by caching its value in a processor register and reusing it later on.
Don’t use volatile for inter-task synchronisation, as it’s simply not intended for that purpose!
As Richard said, ensure that globally used or shared resources, like semaphores used by different tasks or by a task and an ISR, are created before getting used.
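An illustration of the hardware case described above (the register address and bit are made up for the example): without volatile, the compiler could hoist the read out of the loop and spin on a stale value.

#include <stdint.h>

#define UART_STATUS_REG  ( *( volatile uint32_t * ) 0x40001000UL )   /* hypothetical address */
#define TX_READY_BIT     ( 1UL << 0 )

void vWaitForTxReady( void )
{
    /* The volatile access forces a fresh read of the register each iteration. */
    while( ( UART_STATUS_REG & TX_READY_BIT ) == 0UL )
    {
        /* busy-wait until the hardware sets the bit */
    }
}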

michaeln32 wrote on Sunday, June 03, 2018:

Thank you Richard

Is it right to define SEMA static like this:

static SemaphoreHandle_t SEMA = NULL; /* define SEMA at global scope, not in a function or task */

Can you see any problems with that?

Thanks

hs2sf wrote on Sunday, June 03, 2018:

This is fine, no problems with that.
I’d take care to actually create it, e.g. during an init phase, before using it in operational code.

michaeln32 wrote on Sunday, June 03, 2018:

Thank you HS2

richard_damon wrote on Sunday, June 03, 2018:

I would disagree with ‘Don’t use volatile for inter-task synchronisation as it’s simply not intended for that purpose!’. In the standard, volatile means the variable needs to be treated as if it might be changed by means outside of the local code, which would include another task, so it can be useful for this. (On multi-core processors, which aren’t the target for FreeRTOS, it might not be sufficient, as you may also need some hardware memory barriers, but those aren’t needed on a single-core processor.) One common use is to set an ‘alert’ variable for a task, which the task periodically checks while it is doing some long operation to see if it needs to do something else. This can be a very lightweight test. You do need to be a bit careful that the other code has ‘observable’ behavior to avoid some possible optimizations, but in ‘real’ code I find this isn’t normally a problem.
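A sketch of that ‘alert’ pattern, assuming a single-core target (the names are made up for the example): one context sets the flag, another polls it during a long operation.

#include <stdbool.h>

static volatile bool bAbortRequested = false;

void vRequestAbort( void )          /* called from another task or an ISR */
{
    bAbortRequested = true;
}

void vLongOperation( void )
{
    for( long i = 0; i < 1000000L; i++ )
    {
        /* ... one chunk of the long computation ... */

        if( bAbortRequested )       /* volatile forces a fresh read each pass */
        {
            break;                  /* react to the alert promptly */
        }
    }
}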

hs2sf wrote on Sunday, June 03, 2018:

Sure, with the constraints you mentioned (single core, no caches?, …) it is possible to (mis)use a volatile variable for this simple kind of IPC. However, even for this purpose on this kind of system I’d use an atomic variable, because it has been standard for a while and I prefer and recommend writing portable code, which will still work properly even if FreeRTOS gets ported to multi-core processors, or if one ports such code to another system.
I just tried to give more general, good advice to avoid a pretty common pitfall or misunderstanding of volatile.
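The portable alternative suggested here, as a minimal sketch assuming a compiler with C11 <stdatomic.h> support (the names are made up for the example):

#include <stdatomic.h>
#include <stdbool.h>

/* Static storage duration zero-initializes the flag to false. */
static atomic_bool bAbortRequested;

void vRequestAbort( void )
{
    atomic_store( &bAbortRequested, true );   /* set from another task or ISR */
}

bool bShouldAbort( void )
{
    return atomic_load( &bAbortRequested );   /* polled by the long-running task */
}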

richard_damon wrote on Monday, June 04, 2018:

Yes, atomics are now the preferred method for this IF (and that is a big if for this sort of machine) you have a modern compiler that supports them. (Single-processor machines with caches will still work with just volatile for IPC; it is only multi-processor that gives issues. Data caches actually give you more problems with hardware I/O, where you imply volatile works.)

I suspect I am much more likely to want to port my FreeRTOS code to a compiler that doesn’t support atomics than to a machine where volatile doesn’t work (and the latter is much more likely to require a major restructuring anyway). I will note that the FreeRTOS code base includes workarounds for compilers that don’t fully support the C90 standard. This is why it invokes technically undefined behavior by violating the One Definition Rule: handles are pointers to void in the user code but pointers to structs in the kernel code, making almost every FreeRTOS system call a violation of that rule. This not only invokes technically undefined behavior, but also increases the chance of error, as user code can mix up handle usage.

rtel wrote on Monday, June 04, 2018:

the One Definition Rule: handles are pointers to void in the user code but pointers to structs in the kernel code, making almost every FreeRTOS system call a violation of that rule. This not only invokes technically undefined behavior, but also increases the chance of error, as user code can mix up handle usage.

Actually… that is changing ;o) You will note the latest event_groups.c and tasks.c in SVN no longer use void* for EventGroupHandle_t and TaskHandle_t. This is currently being tested, as the last time we attempted this it broke several GNU-based debuggers. If all works ok then the same will be done for queue handles, etc.
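A simplified sketch of the pattern being described (the struct names here are illustrative, not the actual kernel declarations): replacing void* handles with pointers to distinct forward-declared structs lets the compiler reject code that passes one handle type where another is expected, while the struct definitions stay private to the kernel sources.

/* Old style: every handle is interchangeable, so mix-ups compile cleanly. */
typedef void * OldTaskHandle_t;
typedef void * OldEventGroupHandle_t;

/* New style: opaque, distinct types; the structs are defined only inside
   the kernel sources, so user code still cannot peek into them. */
struct tskTaskControlBlock;
struct EventGroupDef_t;
typedef struct tskTaskControlBlock * NewTaskHandle_t;
typedef struct EventGroupDef_t     * NewEventGroupHandle_t;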

hs2sf wrote on Tuesday, June 05, 2018:

data caches actually give you more problems with hardware I/O, where you imply volatile works

Sorry for nitpicking, Richard :wink: But I didn’t imply ‘this works’, I just said it’s needed.
In conclusion, everything can be done if one is aware of all the consequences (as you are).
In my experience this is often not the case, because it’s only possible with very good experience and knowledge.
Hence I have got used to avoiding giving expert hints (with maybe non-obvious constraints) in the first place.