xTimerChangePeriod consistently adds one tick of delay

I have a system where I want to set up a one-shot timer to detect a communication timeout at the beginning of every transfer. To achieve that, I created a timer with a dummy period; then, when I start the communication, I start the timer using the xTimerChangePeriod API, specifying the timeout value that I want.
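For reference, this is roughly the pattern I mean (a minimal sketch with illustrative names and a placeholder period, not my actual code):

#include "FreeRTOS.h"
#include "timers.h"

static StaticTimer_t xTimeoutTimerBuffer;
static TimerHandle_t xTimeoutTimer;

static void vTimeoutCallback( TimerHandle_t xTimer );

void vSetupTimeoutTimer( void )
{
    /* One-shot timer created with a dummy period; the real timeout
       is supplied later via xTimerChangePeriod(). */
    xTimeoutTimer = xTimerCreateStatic( "TimeoutTimer",
                                        pdMS_TO_TICKS( 1 ), /* dummy period */
                                        pdFALSE,            /* one-shot */
                                        NULL,               /* timer ID */
                                        vTimeoutCallback,
                                        &xTimeoutTimerBuffer );
}

void vStartTransferTimeout( TickType_t xTimeoutTicks )
{
    /* xTimerChangePeriod() also starts a dormant timer, so this call
       both sets the timeout and arms the timer. */
    xTimerChangePeriod( xTimeoutTimer, xTimeoutTicks, 0 );
}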

However, what I observe is that doing this consistently adds 1ms of delay (my tick rate is 1000Hz). My assumption here is that the timer task blocks until the end of the current tick before yielding to other tasks.

Is this behavior normal? I couldn't find where this delay would come from in the code. In this system, the timer task priority is 2 and all other tasks are at priority 0 (I tried changing the other tasks' priority to 1, but it made no difference).

If it makes any difference, I am using the xTimerCreateStatic API to create the timer.

FreeRTOS version is v202112 and port is PIC32MZ.

This kind of suspicion can best be verified or falsified with Percepio’s Tracealyzer or a compatible tool…

Is the difference exactly 1ms? Or is it something in between? Is it more or less than that every time?

If it is less than the time you set the timer to trigger at, then I'd say that it is normal. Since the timer only has 1ms (one tick) precision, it starts counting ticks from the moment you start it, even when that moment is midway between two ticks. This means the actual trigger time can be between 0ms and 1ms shorter than requested. Let me try to explain my statement with a diagram.


--------|---------|---------|---------|---------|---------|---------|---- Timer ticks
            |
            |-- xTimerChangePeriod called here with expiration set to 2 ticks interval
                            |
                            |-- // Here is where the timer gets triggered.
                                // Essentially, with 1+x (x<1) timer ticks (which
                                // is equivalent to 1.x milliseconds).
            |<-- 1.x ms --->|

Does the above diagram make sense? Let me know if this matches with what you are seeing in your case.
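As an aside: if you need the timeout to be at least the requested duration in spite of this quantization, a common workaround is to add one extra tick when converting (a sketch only; ulTimeoutMs and xTimeoutTimer are illustrative names):

/* pdMS_TO_TICKS() truncates to whole ticks, and the first tick may
   already be partially elapsed when the timer is started, so add one
   tick to guarantee at least ulTimeoutMs of real time. */
TickType_t xTimeoutTicks = pdMS_TO_TICKS( ulTimeoutMs ) + 1;
xTimerChangePeriod( xTimeoutTimer, xTimeoutTicks, 0 );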

Thanks,
Aniruddha


The exact use case is an i2c transfer timeout. In the i2c state machine, I start the timer immediately after the start bit and see a 991us delay on the logic analyzer before the data. If I remove the timer, the delay is approximately 10us. So the delay added by the call to FreeRTOS's xTimerChangePeriod is about 980us, very close to the expected 1ms tick period. The delay appears to be the same every time.

Let me share some code snippets:

The i2c interrupts only generate direct-to-task notifications:

void I2cMasterx_MASTER_InterruptHandler( i2c_master_common_ctx_t *ctx )
{
    /* ACK the bus interrupt flag */
    *ctx->IFSxCLR = ctx->MASTER_INT_MASK;

    BaseType_t xHigherPriorityTaskWoken;

    xTaskNotifyFromISR( ctx->xI2cMasterxTaskHandle,
                        I2CMASTER_EVENT_MASTER_INT,
                        eSetValueWithOverwrite,
                        &xHigherPriorityTaskWoken );

    portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
}

The timer callback function likewise only generates direct-to-task notifications (here the timer ID is used to pass a context containing the task handle to notify):

static void vI2cMasterxTimerCallback( TimerHandle_t xTimer )
{
    i2c_master_common_ctx_t * ctx = (i2c_master_common_ctx_t *)pvTimerGetTimerID( xTimer );
    
    xTaskNotify( ctx->xI2cMasterxTaskHandle, I2CMASTER_EVENT_TIMEOUT,
                 eSetValueWithOverwrite );
}

Finally, the i2c bus peripheral registers are all written inside a FreeRTOS task that implements a state machine. The loop starts by waiting for a task notification:

uint32_t ulNotificationValue;

/* wait for a notification indefinitely */
while( pdTRUE != xTaskNotifyWait( 0, 0xFFFFFFFF, &ulNotificationValue,
                                  portMAX_DELAY ) );

And then later the timer is enabled like so:

xTimerChangePeriod( i2c_ctx->xI2cMasterxTimerHandle, expiryTime,
                    (TickType_t)0 );

In the case where the transfer completes successfully, the timer is stopped before it expires:

xTimerStop( i2c_ctx->xI2cMasterxTimerHandle, (TickType_t)0 );
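As a side note: both xTimerChangePeriod() and xTimerStop() post a command to the timer command queue, and with a block time of 0 they return pdFAIL immediately if that queue is full, so the return values are worth checking. A defensive sketch using the names from the snippets above:

if( xTimerChangePeriod( i2c_ctx->xI2cMasterxTimerHandle, expiryTime,
                        (TickType_t)0 ) != pdPASS )
{
    /* The command could not be queued, so no timeout is armed;
       handle the error here. */
}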

Edit: I found the root cause of the delay to be something else.

The task placing the transfer in the first place (i.e., the one sending the initial xTaskNotify) had no taskYIELD() after the call and entered a busy-wait loop waiting for the transfer to complete. I had assumed it would be preempted, but since its priority is the same as the i2c task's, it runs for its entire time slice before the scheduler switches away.
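In case it helps anyone else, this is roughly the shape of the problem and the fix (a sketch; xI2cTaskHandle, START_TRANSFER and transferComplete are illustrative names, not my actual code):

/* Before: notify the i2c task, then spin at the same priority.
   With round-robin time slicing the spinning task keeps the CPU for
   the rest of its time slice (up to one tick) before the i2c task
   ever runs. */
xTaskNotify( xI2cTaskHandle, START_TRANSFER, eSetValueWithOverwrite );
while( !transferComplete )
{
    /* spin */
}

/* After: yield right away so the equal-priority i2c task can run. */
xTaskNotify( xI2cTaskHandle, START_TRANSFER, eSetValueWithOverwrite );
taskYIELD();
while( !transferComplete )
{
    /* spin (blocking on a notification would be better still) */
}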

What is odd, however, is that this behavior was only observed after adding the xTimerChangePeriod call.


Better to initialize
BaseType_t xHigherPriorityTaskWoken = pdFALSE;
in the ISR.
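Applied to the handler above, that is the same code with just the initialization added:

void I2cMasterx_MASTER_InterruptHandler( i2c_master_common_ctx_t *ctx )
{
    /* Must start out as pdFALSE: xTaskNotifyFromISR() only ever sets
       it to pdTRUE and never clears it, so an uninitialized value can
       request a spurious context switch. */
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* ACK the bus interrupt flag */
    *ctx->IFSxCLR = ctx->MASTER_INT_MASK;

    xTaskNotifyFromISR( ctx->xI2cMasterxTaskHandle,
                        I2CMASTER_EVENT_MASTER_INT,
                        eSetValueWithOverwrite,
                        &xHigherPriorityTaskWoken );

    portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
}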

@rtel Maybe the sometimes-missing init in user code comes from the example.

I think it's also not ideal that the flag is static in that example; probably a copy-and-paste thing. Both should be improved (to be consistent with the other examples).

@maharvey Thank you for reporting your solution.

@hs2 Thank you for pointing out those documentation issues. I have forwarded them to our documentation team.
