What does portTICK_RATE_MS stand for?

nobody wrote on Friday, December 22, 2006:

On most platforms portTICK_RATE_MS is defined as "( ( portTickType ) 1000 / configTICK_RATE_HZ )", and configTICK_RATE_HZ is also defined as 1000, so the value of portTICK_RATE_MS is 1. What does this mean? 1 ms per tick? 1 tick per ms?

In the help file I saw:
"// We are to delay for 200ms.
static const xTickType xDelayTime = 200 / portTICK_RATE_MS;"
But I don't understand this: does a delay of 200/1 = 200 ticks mean a delay of 200 ms? Thanks for any response!

rtel wrote on Friday, December 22, 2006:

portTICK_RATE_MS is only used by the demo applications.  It is the "tick rate in milliseconds", which is a poor description for what is really the number of milliseconds between each tick.  Therefore with a tick frequency of 1000 Hz the tick rate in milliseconds is 1 - there is a tick interrupt every 1 ms.  With a tick frequency of 100 Hz the tick rate in milliseconds is 10 - there is one tick interrupt every 10 ms.
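
For illustration, here is a minimal sketch of the arithmetic (the exact macro definitions vary by port and by the application's FreeRTOSConfig.h, so treat this as an example rather than the actual source):

    #define configTICK_RATE_HZ    ( ( portTickType ) 1000 )
    #define portTICK_RATE_MS      ( ( portTickType ) 1000 / configTICK_RATE_HZ )

    /* configTICK_RATE_HZ = 1000  ->  portTICK_RATE_MS = 1  (one tick interrupt every 1 ms)  */
    /* configTICK_RATE_HZ = 100   ->  portTICK_RATE_MS = 10 (one tick interrupt every 10 ms) */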

It was intended to allow the demo applications to execute and behave the same at any tick frequency.  Therefore, in the demo application tasks, any delay times are calculated with reference to portTICK_RATE_MS.  As you change the tick frequency through the application-defined configTICK_RATE_HZ, the delay periods used by the demo applications adjust automatically.
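
For example, a demo-style 200 ms delay is written in terms of portTICK_RATE_MS rather than as a raw tick count (a sketch using the vTaskDelay() API and the portTickType type from this era of the kernel, not a quote from a particular demo):

    /* Delay for 200 ms regardless of the configured tick frequency. */
    vTaskDelay( ( portTickType ) 200 / portTICK_RATE_MS );

    /* At 1000 Hz: 200 / 1  = 200 ticks x 1 ms  = 200 ms */
    /* At 100 Hz:  200 / 10 = 20 ticks  x 10 ms = 200 ms */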

I agree it is not clear, and it has resolution problems (it does not work at all if the tick frequency is greater than 1000 Hz).  It does not, however, affect the kernel itself, just the demo apps.
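
To make the resolution point concrete: the macro uses integer division, so any tick frequency above 1000 Hz truncates the result to zero (a worked example, using 2000 Hz purely as an assumed value):

    /* configTICK_RATE_HZ = 2000  ->  1000 / 2000 = 0 in integer arithmetic, */
    /* so an expression such as 200 / portTICK_RATE_MS would divide by zero. */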

Regards.

nobody wrote on Friday, December 22, 2006:

Got it. Thank you! Adding a comment in the source code might make it clearer.