uxTaskGetStackHighWaterMark affecting task timing?

Hi, I have several tasks. I am using the vTaskDelayUntil approach so that my tasks run at a periodic interval (vanilla code taken from Mastering_the_FreeRTOS_Real_Time_Kernel-A_Hands-On_Tutorial_Guide). I have also implemented a measurement of the period at which each task actually runs: I call xTaskGetTickCount() as soon as the task begins to run and subtract the value from the previous iteration. Under normal circumstances the measured period equals the ‘wait until’ time - so far so good. (I am doing this to detect if a task overruns; if there is a better way to do this please tell me.)

I then introduced the high water mark diagnostic, uxTaskGetStackHighWaterMark. Once this was introduced, I occasionally (about 1% of the time) get incorrect measured periods.

Originally I checked the stack depth of all tasks from inside a single task, and thought maybe some context switching was happening. But I have now moved the check so that each task measures its own stack, and I still see these occasional errors in the measured period. The error values can be larger or smaller than the expected period.

On running some tests, for a period of 200 ticks (2 ms at a 10 uS tick)
I see 190, 220, 189, 211, 212, 189, etc.

One further thing I should say is that I have changed the system tick to be 10 uS rather than 1 ms. It’s not easy to go back and undo this due to the rest of the system.
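For reference, a 10 uS tick corresponds to the following setting in FreeRTOSConfig.h (a sketch, assuming the standard configTICK_RATE_HZ macro; check your port's config). Note that at this rate pdMS_TO_TICKS( 2 ) evaluates to 200 ticks, which matches the "period of 200" figures above:

```c
/* FreeRTOSConfig.h (sketch): a 10 uS tick means 100 000 ticks per second. */
#define configTICK_RATE_HZ    ( ( TickType_t ) 100000 )

/* With this rate, pdMS_TO_TICKS( 2 ) == 200 ticks. */
```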

I was wondering if there is something in the implementation of uxTaskGetStackHighWaterMark that would cause the task timing to be slightly different sometimes.

e.g.
const TickType_t xDelay = pdMS_TO_TICKS( 2 );
TickType_t xLastWakeTime = xTaskGetTickCount();
TickType_t xStartTime, TaskInterval, LastRunTime = 0;
UBaseType_t WaterMark_u32;

while( 1 )
{
    xStartTime = xTaskGetTickCount();

    /* do the task stuff */
    {
        blah blah
    }

    /* Prove that the task is running at the expected rate. */
    TaskInterval = ( xStartTime - LastRunTime );
    LastRunTime = xStartTime;
    if( TaskInterval != xDelay )
    {
        /* Here I see the occasional error. */
    }

    /* Check the stack usage. If I remove this line, I do not see errors. */
    WaterMark_u32 = uxTaskGetStackHighWaterMark( TaskConfig_as[ TaskNr_u8 ].Handle_s );

    vTaskDelayUntil( &xLastWakeTime, xDelay );
}

The function’s implementation is here. I can’t see anything that would impact timing unless a large portion of a large stack is unused, which would take a long time to scan. 10 uS is a very short tick period though - I suspect a good portion of your total execution time is spent servicing the tick interrupt, which could itself impact timing.

Thanks for such a prompt reply. Looking at the implementation, I see it is a while loop from the end of the stack until it finds data, which I agree should not affect timing. However, when I searched for that function I also found a PRIVILEGED_FUNCTION version. I have not researched this, but could there be a context switch to measure the stack in that case - measuring it from the kernel? Originally, when I measured all the stacks from one task, I think it affected the timing of each task, so possibly affecting the ‘wait until’ value?

My stacks are 4096 bytes (so 1024 32-bit words). From the measurements I can determine that all three are over 75% utilised.

Thanks for the comment about the tick. I will rethink this part of the design. I changed the tick in order to get better granularity for the measurement; I should set up another hardware timer on the micro instead. I could then probably go back to a 1 ms tick, in which case the error might disappear, since it is only about 100 uS.

I do think it is interesting that including that call has a slight effect on the timing, though. Thanks again.

What is your MCU frequency? I agree with Richard that a 10 uS tick will be demanding for most common platforms. You should really do your comparison with the same time base.