Problem with `portMAX_DELAY` being negative

Hello.
I just wanted to point out one issue with TickType_t and portMAX_DELAY that may not be obvious. portMAX_DELAY is not just a special value that means ‘infinity’: it must also be large enough to compare greater than any timeout passed to an API function.
If TickType_t is uint32_t and portMAX_DELAY is 0xFFFFFFFF, everything works fine. But if you change the tick type to int32_t (signed), portMAX_DELAY becomes -1. That would actually be a good value for ‘infinity’, since a wait time should never be negative. The problem is that the sorting algorithm inside the kernel expects portMAX_DELAY to be the largest possible value; if it is -1, the sorting breaks down and the kernel gets stuck in an infinite loop inside vListInsert() the first time a blocking API function is called. Changing portMAX_DELAY to 0x7FFFFFFF (or INT32_MAX) solves the issue.
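For reference, here is a simplified sketch of the insertion loop in vListInsert() (list.c) that shows the failure mode. The event lists are circular and terminated by an end marker whose item value is set to portMAX_DELAY:

```c
/* Simplified from FreeRTOS list.c, for illustration only. */
for( pxIterator = ( ListItem_t * ) &( pxList->xListEnd );
     pxIterator->pxNext->xItemValue <= xValueOfInsertion;
     pxIterator = pxIterator->pxNext )
{
    /* Walk forward until a node with a larger item value is found.
       With an unsigned TickType_t the end marker (0xFFFFFFFF) always
       stops the loop. With a signed type the end marker is -1, which
       compares <= any non-negative timeout, so the loop cycles around
       the circular list forever. */
}
```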
I think it might be worth stating in the documentation what the restriction is on the value of portMAX_DELAY.

Have a nice day,
Stepan.

Actually, that is more a restriction on the type of TickType_t, and the assumption that it is an unsigned type is likely spread through a large part of the code base.

My guess is that making the code work completely correctly with a signed TickType_t would be a large enough job that it is not worth the effort.

Well, that’s good to know. It seems to work fine with the signed type for now, but I’ll take your advice and change it to an unsigned type.
Thank you for your reply.

I think the biggest assumption of unsignedness is in the counter roll-over logic, which checks for tick + 1 == 0 to detect the roll-over, and in the assumption that the difference between a later tick value and an earlier tick value in the same epoch will always be non-negative. Depending on the speed of your tick, a 32-bit signed count may hold up for quite a while, but it WILL break at some point in a long-running device (at a 1 ms tick, a 32-bit signed count hits the problem after 2^31 ms, roughly 25 days).
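A standalone sketch (not FreeRTOS source) of both unsigned-tick assumptions:

```c
#include <stdint.h>
#include <stdio.h>

typedef uint32_t TickType_t; /* The type the kernel assumes is unsigned. */

/* Unsigned subtraction yields the correct elapsed time even when the
   counter wraps once between the two samples (modular arithmetic). */
static TickType_t xElapsed( TickType_t xNow, TickType_t xThen )
{
    return xNow - xThen;
}

int main( void )
{
    TickType_t xTickCount = 0xFFFFFFFFUL; /* One tick before the wrap. */

    /* Roll-over detection: incrementing the maximum unsigned value
       wraps to 0. The signed equivalent (INT32_MAX + 1) would be
       undefined behaviour, and a negative tick difference would follow. */
    xTickCount++;
    printf( "wrapped: %s\n", ( xTickCount == 0 ) ? "yes" : "no" );

    /* 10 ticks elapsed across the wrap boundary. */
    printf( "elapsed: %lu\n", ( unsigned long ) xElapsed( 5, 0xFFFFFFFBUL ) );
    return 0;
}
```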

As richard-damon says, the tick count handling assumes an unsigned type. This is partly for efficiency, but also because in the very early days of the 8- and 16-bit ports the type used to hold the tick count was commonly 16 bits. Even when unsigned, 16 bits doesn’t allow for a very long delay. The type is set in the port layer, and I don’t think it is user modifiable beyond the configUSE_16_BIT_TICKS setting (which these days is used more for testing, as it allows frequent tick wrap-arounds).
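For what it’s worth, this is roughly the pattern a typical 32-bit port uses to select the type in its portmacro.h; the exact details may differ from port to port:

```c
/* Approximate pattern from a 32-bit portmacro.h, shown for illustration. */
#if( configUSE_16_BIT_TICKS == 1 )
    typedef uint16_t TickType_t;
    #define portMAX_DELAY    ( TickType_t ) 0xffff
#else
    typedef uint32_t TickType_t;
    #define portMAX_DELAY    ( TickType_t ) 0xffffffffUL
#endif
```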

Thank you both for the clarification.
We were actually using int64_t as the tick type so that it would “never” roll over. But in the end I decided to change it back not only to an unsigned type, but also to a 32-bit type, for several reasons. Having a 64-bit tick type throughout the kernel (on Cortex-M) would have a noticeable performance impact (including a critical section needed to read the tick count). Also, as I gathered from other discussions, on-chip OS-aware debugging might not work well with it. Now we use an application tick hook to increment our own 64-bit tick counter, as sketched below.