vApplicationGetTimerTaskMemory type bug

Since I’ve been asking about missing prototypes, let me also bring up a bug in the types used. The prototype for vApplicationGetTimerTaskMemory appears to be

void vApplicationGetTimerTaskMemory( StaticTask_t **ppxTimerTaskTCBBuffer, StackType_t **ppxTimerTaskStackBuffer, uint32_t *pulTimerTaskStackSize );

That gives the type for the stack size as explicitly a uint32_t (already that seems odd). It gets worse, though, when looking at the example implementations, which use this line

*pulTimerTaskStackSize = configTIMER_TASK_STACK_DEPTH;

When running static analysis, this line flags a type conflict, which led me to start digging. In the ARM M4F port, configTIMER_TASK_STACK_DEPTH is defined as configMINIMAL_STACK_SIZE, which seems eminently reasonable, but upon searching for it I find it declared as ( unsigned short ) 120, which is a little odd but not entirely unreasonable for a small-memory setup. Now, the stack size is used in xTaskCreate, where it is typed as configSTACK_DEPTH_TYPE, so this is obviously what all other instances of stack size should use (although the examples I’m working from clearly do not).

So, tracing back further, configSTACK_DEPTH_TYPE defaults to uint16_t, so now we have three different types for stack sizes. I’m actually a little afraid of searching further.
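To make the conflict concrete, here is a minimal sketch collecting the three types involved (exact file locations vary by port; the values are the ones from the demo above):

/* Prototype in the static-allocation support: the size is a uint32_t. */
void vApplicationGetTimerTaskMemory( StaticTask_t **ppxTimerTaskTCBBuffer,
                                     StackType_t **ppxTimerTaskStackBuffer,
                                     uint32_t *pulTimerTaskStackSize );

/* Demo configuration: the value assigned through that pointer is an unsigned short. */
#define configMINIMAL_STACK_SIZE        ( ( unsigned short ) 120 )
#define configTIMER_TASK_STACK_DEPTH    configMINIMAL_STACK_SIZE

/* Meanwhile xTaskCreate() takes its stack depth as configSTACK_DEPTH_TYPE,
   which defaults to uint16_t when left undefined. */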

Robert

The documented signature for vApplicationGetTimerTaskMemory(), which uses a uint32_t as the stack size, is shown here: https://www.freertos.org/a00110.html#configSUPPORT_STATIC_ALLOCATION

It matches the prototype and usage in the code. The uint32_t is then passed as the third parameter to xTaskCreateStatic(), which is also a uint32_t. Inside xTaskCreateStatic() the uint32_t is passed to prvInitialiseNewTask(), which also expects a uint32_t.
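For reference, a typical implementation of the callback, along the lines of the example on that documentation page (buffer names are illustrative):

void vApplicationGetTimerTaskMemory( StaticTask_t **ppxTimerTaskTCBBuffer,
                                     StackType_t **ppxTimerTaskStackBuffer,
                                     uint32_t *pulTimerTaskStackSize )
{
    /* Static, so the buffers outlive this function call. */
    static StaticTask_t xTimerTaskTCB;
    static StackType_t uxTimerTaskStack[ configTIMER_TASK_STACK_DEPTH ];

    *ppxTimerTaskTCBBuffer = &xTimerTaskTCB;
    *ppxTimerTaskStackBuffer = uxTimerTaskStack;

    /* The out-parameter is a uint32_t, whatever type
       configTIMER_TASK_STACK_DEPTH happens to have. */
    *pulTimerTaskStackSize = configTIMER_TASK_STACK_DEPTH;
}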

configTIMER_TASK_STACK_DEPTH is a constant provided by the application writer, not part of the kernel’s implementation. Different demos will use different types, as different architectures have different word sizes. The demo applications are not part of the kernel implementation, so if the definition is incorrect in a demo application, or in real code, it can be edited in the FreeRTOSConfig.h file without changing the kernel implementation. Likewise configMINIMAL_STACK_SIZE; any macros that start with ‘config’ are intended to be set by the user.

That prototype is
void vApplicationGetIdleTaskMemory( StaticTask_t **ppxIdleTaskTCBBuffer,
StackType_t **ppxIdleTaskStackBuffer,
uint32_t *pulIdleTaskStackSize )

That is consistent with what I said above. It is inconsistent with the stack size type declaration of configSTACK_DEPTH_TYPE in xTaskCreate. That is clearly wrong, but both come from your source.

Whether the constants are set by the user or not, the internal types should, indeed must, be consistent. That would only leave the examples (included with FreeRTOS) as incorrect. However, in this case, it is not just a problem with the examples but with the kernel code.

Robert

Note, that configSTACK_DEPTH_TYPE is NOT the type that makes up the stack, but the type that expresses the size of the stack. You can easily have a machine where StackType_t could be uint16_t or uint64_t based on the alignment requirements of the processor, but the depth is best expressed as a uint32_t as that is what spans the address space.
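A minimal sketch of that distinction (the element type and figures are illustrative, not from any particular port):

#include <stdint.h>

/* Hypothetical 64-bit-aligned port: each stack element is 8 bytes
   (driven by alignment), but the count of elements is a separate,
   independent type choice. */
typedef uint64_t StackType_t;      /* element type, set by the port */

static StackType_t xStack[ 512 ];  /* the stack itself */
static uint32_t ulDepth = 512;     /* its depth, counted in elements */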

I am aware of that and that makes the obvious type for expressing the size a size_t.

However, in the event that a different type (other than size_t) is used to express the size of a stack, that type should, and indeed must, be used consistently. You should not have the sizes of different stacks expressed using different types, except in some rare instances that do not map particularly well to C anyway.

Robert

Unless I missed something when I looked at the code, I’m not aware of vApplicationGetIdleTaskMemory() making use of configSTACK_DEPTH_TYPE, as it uses the newer xTaskCreateStatic() rather than xTaskCreate().

Originally (18 years ago, when MCUs didn’t have much RAM) xTaskCreate() used a UBaseType_t to hold the stack depth, which is the natural word size for the MCU architecture. That later became an issue for larger 8-bit devices that could not hold the stack size in a variable of that type. There were lots of conversations on the forum about solutions that would enable these 8-bit devices to declare larger stacks while simultaneously ensuring backward compatibility for all existing applications. The solution that seemed best for all involved was to change the definition of xTaskCreate() so it used a USER DEFINABLE MACRO to specify the type used to hold the stack size, hence the introduction of configSTACK_DEPTH_TYPE. As always with FreeRTOS, configSTACK_DEPTH_TYPE reverts to a backward-compatible default if it is left undefined, but defining it in FreeRTOSConfig.h enables users to define it to whatever type they like. Therefore, if they like, they can also define it to be a uint32_t.
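The fallback in FreeRTOS.h is roughly this:

/* If the application writer does not define the macro in
   FreeRTOSConfig.h, fall back to the historical 16-bit type. */
#ifndef configSTACK_DEPTH_TYPE
    #define configSTACK_DEPTH_TYPE    uint16_t
#endif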

As I look at the code now I can’t see a bug, given that it is the application writer who actually specifies the type, but if you still feel there is one, please open a bug report here: https://github.com/FreeRTOS/FreeRTOS-Kernel

I have learned to never assume things are obvious myself. On most 8-bit microcontrollers size_t would be an unsigned int, which would be 16 bits (see e.g. http://ww1.microchip.com/downloads/en/devicedoc/50002053g.pdf), which means that size_t would be smaller than the stack size on many processors and, as such, not actually appropriate. I believe it is this kind of problem that led to the type being made configurable.

I have learned to never assume things are obvious myself. On most 8-bit microcontrollers size_t would be an unsigned int, which would be 16 bits

Which is indeed the appropriate size for an 8-bit micro with 16-bit addressing. size_t is essentially defined as the type appropriate for expressing the size of the largest addressable object. So size_t is, in most cases, the appropriate type for sizing an object in memory. This makes it the obvious choice for a stack size type. There can be other cases where other sizes might suit better.
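A trivial illustration of that definition (standard C, nothing FreeRTOS-specific):

#include <stddef.h>
#include <stdio.h>

int main( void )
{
    int buffer[ 100 ];

    /* sizeof yields a size_t by definition, so size_t must be able
       to express the size of the largest declarable object. */
    size_t n = sizeof buffer;
    printf( "%zu bytes\n", n );

    return 0;
}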

In any case, I cannot think of a case where application stacks should have a different type describing their size depending on which stack it is (there are specialized H/W stacks where that would not necessarily be the case).

Robert

They could if all uses of stack size used configSTACK_DEPTH_TYPE as the type. They do not; that is the point. In different instances, even within the kernel, different types are used to specify the size of the stack (the size, as opposed to the type of the stack’s member elements).

And, yes, the current consequences of this appear to be minimal, but without going through and verifying that that’s the case under all possible variations of the various size types, it is impossible to be sure. Even doing that check only upgrades “impossible to be sure” to “difficult to be sure”. It would be far better and clearer to use the same type in all cases where the size of a stack is specified.

It becomes a case of the current situation being “not obviously wrong” when it could be a case of “obviously not wrong”.

Robert

Pretty sure that’s not the case; I don’t know of any processor for which that would be true. The only case I can think of in which it might be is if you artificially extend the address space beyond the processor’s addressing capabilities. Even then, I don’t think you’d extend the stack beyond those capabilities, although there are some strange solutions out there.

There are processors where you can choose smaller memory models than the total addressing space (such as the x86 and C166 architectures) but size_t is sized to address the memory sizes of those models.

Of course, it’s also possible to have a non-standard compliant C compiler. But then a lot of things go out the window.

Robert

Only problem is that the device I was referring to has an 18-bit addressable space, so no, that is just not correct.

Last I checked, size_t is defined as the type of the result of sizeof (section 6.5.3.4 of the C99 spec), and that just makes it the size of the largest single object you can declare. Since these PIC microcontrollers have paged memory, the largest possible single object is much smaller than the addressable space.

You see, if you want to target a single processor you can make assumptions like that, but if you want to build something that works across most or even all processors, these kinds of incorrect assumptions will just catch up with you in the end.

I would say that if you want to support all of these processors, the only way is going to be to let the user define the stack type in the configuration (which I appreciate is done here).

Perhaps configSTACK_DEPTH_TYPE could be used more liberally; in vApplicationGetTimerTaskMemory, for example, would it not be more appropriate to use configSTACK_DEPTH_TYPE for the last parameter?
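That would make the prototype look something like this (hypothetical, not the current API):

/* Hypothetical variant, NOT the current FreeRTOS prototype: the
   stack size out-parameter uses the configurable depth type. */
void vApplicationGetTimerTaskMemory( StaticTask_t **ppxTimerTaskTCBBuffer,
                                     StackType_t **ppxTimerTaskStackBuffer,
                                     configSTACK_DEPTH_TYPE *puxTimerTaskStackSize );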

You seem to know the standards pretty well; I assume you know who Dan Saks is? He gives two examples in this article.

https://www.embedded.com/why-size_t-matters/

On these 8-bit processors, especially modified Harvard architectures, compilers often use an address space that covers both RAM and flash. On the 18F57K42, e.g., the program memory is 128 KB (17 bits) and the data memory is 8 KB, addressable with 13 bits. In the compiler, the 18th bit is used to select between the flash address space and the RAM address space, making pointers effectively 18 bits in size for the compiler.

This arrangement allows the use of memcpy to copy from flash memory (constant strings) to RAM and is perfectly acceptable according to the C standard.

No single object is allowed to be larger than 16 bits can express, and size_t is 16 bits on these processors.
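A small sketch of what that arrangement permits (plain C; on a desktop the address spaces are unified, but the code is the same):

#include <string.h>

/* On a PIC18-class part the compiler places this in flash; a pointer
   to it spans the combined 18-bit address space. */
static const char greeting[] = "hello";

/* This copy lives in RAM. */
static char ram_copy[ sizeof greeting ];

void copy_from_flash( void )
{
    /* memcpy's length parameter is a size_t. 16 bits is plenty here,
       because no single object can exceed what 16 bits expresses,
       even though the pointers themselves are wider. */
    memcpy( ram_copy, greeting, sizeof greeting );
}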

Which makes it an appropriate type for stack size.

A few summary points:

  1. size_t is an appropriate default type for stack size. To be inappropriate you would need
  • a stack greater than the largest addressable object (while technically possible, it’s difficult to see this being compatible with something like FreeRTOS), or
  • a broken C implementation (more likely, but unless it’s a widespread implementation the default shouldn’t cater to it).
  • Note: this works for 8-, 16- and 32-bit architectures.
  2. A particular implementation could decide to use a different, smaller type for stack size to optimize storage or execution speed.
  3. **All instances of stack size must use the same type.**
  • This could be a base type, but a derived type is probably better for a number of reasons (see the sketch below).

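A minimal sketch of point 3 with a derived type (the name stack_size_t is hypothetical, not a FreeRTOS identifier):

#include <stddef.h>

/* One derived type used everywhere a stack size is expressed;
   changing it later is then a one-line edit. */
typedef size_t stack_size_t;

/* Illustrative signatures only; these are not FreeRTOS APIs. */
void vCreateTask( stack_size_t xDepth );
void vReportStackHighWater( stack_size_t xHighWater );
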
#3 is the most important point, which is why I’ve bolded it. It’s the one that FreeRTOS breaks, and note that it breaks it in the kernel. If it were just a few examples I could blame it on the example writers not understanding FreeRTOS, but it’s baked into the kernel.

Robert

I don’t see anywhere that he contradicts what I’ve said.

Robert

What you seem to be missing is that FreeRTOS is a mature product with a policy of minimizing backwards-compatibility issues; as such, it has warts, just like the C language it is written in. A long time ago, perhaps as a mistake, it used basically the equivalent of unsigned int for the stack size. Newer routines, written after it was realized that that was the wrong type, use the new configSTACK_DEPTH_TYPE, but some older ones might not, to keep things backwards compatible.

On 1 I disagree. It would have been great if we could use that and solve the problem; unfortunately, there is more than one currently supported platform where that would not work, so it is not going to help us here.

On 2 and 3: the type being configurable is an attempt to manage this. There are some finer nuances here you have to keep in mind. One of these is that changing types (i.e. changing the API) will cause millions of projects out there to break, so if we have a backwards-compatible way of doing this, that could work, just as richard-damon was pointing out. Historically, newer APIs were built using the configurable type.

Also, there is a distinction in general between methods that refer to the stack size in bytes and methods that refer to it in stack words, so care should be taken when trying to make these the same; the difference is already confusing, and making both of these different counts the same type could be dangerous.
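A short sketch of the two counts (element type and figures are illustrative):

#include <stddef.h>
#include <stdint.h>

typedef uint64_t StackType_t;   /* illustrative: an 8-byte stack word */

void vSketch( void )
{
    size_t uxDepthInWords = 256;  /* what word-based APIs expect */
    size_t uxSizeInBytes = uxDepthInWords * sizeof( StackType_t ); /* 2048 */

    /* Passing one where the other is expected is off by a factor of
       sizeof( StackType_t ), and a shared type would not catch that. */
    ( void ) uxSizeInBytes;
}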

Is there something specific you are suggesting we should do to improve this which is also backwards compatible and will not exclude a number of the currently supported platforms?

Really curious: on what platform would size_t not be large enough to hold the maximum size of the stack? It pretty much has to be in order for the C compiler to be valid.

Optimum is a different question.

Robert

I think this is the most important point here. We can go look at specific platforms, but the important question in a decision like this is: will we lock the entire community into a situation where, if some hypothetical processor comes along in the future, we need to change the APIs to support it?

As you probably know, the stack itself is not specified in the C standard. size_t is defined simply as the type returned by sizeof, which implies that it has to be large enough to represent the largest object you can create.

There are, however, conceivable architectures where that breaks down, particularly those with segmented memory such as paging. Some devices have different regions for byte-addressable vs word-addressable memory.

There is nothing in the C standard that prevents an implementation from using all of the memory pages in such an architecture for stack space; you would be able to communicate the setup to the linker via its standard interface, be it a linker script or a command line.

It is also perfectly legal by the C standard for such an architecture to not allow a single object (on the stack or otherwise) to ever be greater than one page of memory.

This would mean that the requirement for the range of size_t would not be constrained by the total stack size, but rather by the size of the largest object (just as the C standard demands).

It would be possible to map the stack for a FreeRTOS task into such a region and tell the kernel that the size is X, where X is larger than the largest object that can be created. This holds since the stack is not an object in terms of what the standard describes (it is, e.g., not possible in C to write sizeof(stack)).

So I would say that size_t seems like a reasonable type to use for stack size and will work on most architectures just fine, but if you want it to work on ALL possible architectures you will have to be more generic.

The idea with configSTACK_DEPTH_TYPE is that you can set #define configSTACK_DEPTH_TYPE size_t for all the architectures where size_t works, and you can replace it with whatever you need on all the others, so we are future-proof.
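In FreeRTOSConfig.h terms, that amounts to something like this (the port symbol is a made-up placeholder):

#include <stddef.h>
#include <stdint.h>

#if defined( MY_PAGED_18BIT_PORT )    /* hypothetical placeholder */
    /* A 16-bit size_t cannot span the stack sizes this port allows,
       so pick a wider type here. */
    #define configSTACK_DEPTH_TYPE    uint32_t
#else
    /* Wherever size_t spans the address space, just use it. */
    #define configSTACK_DEPTH_TYPE    size_t
#endif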

Now, if we can agree that this is more flexible, then we can perhaps look at specific cases where, IF you set configSTACK_DEPTH_TYPE to be size_t, the compiler emits a warning due to types being coerced, and perhaps there is something we can improve there?

I will repeat what I have said before: it isn’t that size_t wouldn’t normally be a workable type (perhaps with configSTACK_DEPTH_TYPE as a typedef of it for most applications, so we could use something else in the unusual case), but that the choice was made differently many years ago (arguably in error), and to change it now would be a backwards-incompatible change.