I was just wondering: how low can the period of the TICK TIME be?
Can we use a TICK TIME as low as 10us or 1us?
Kind Regards

What do you mean exactly by ‘TICK TIME’?

Hello @hs2
I mean: what is the lowest possible periodicity for the TICK TIME in FreeRTOS?
Can configTICK_RATE_HZ be set to 100000 for a 10us periodicity, or 1000000 for a 1us periodicity, in FreeRTOSConfig.h? See pic below.

Will setting the periodicity of the TICK TIME so low affect the timers or event groups, as I am using these in my implementation?

Kind Regards

As Richard explained in this post, the highest possible tick frequency will be the frequency at which all processing time is spent inside the tick interrupt handler. So, yes, you can set it to that high value - the question is why do you want to do that?

Why not? It’s just a number in the first place :wink:
But your MCU should support the systick at the desired rate, and you should be aware of the consequences. The preemptive scheduler is invoked at this rate, causing a certain overhead.
The systick rate defines the resolution of the FreeRTOS timers and of the timeouts of blocking FreeRTOS API calls.
Why do you think you’d need such an extremely high tick rate? Which bigger problem do you want to solve?
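
For reference, the tick rate is set via configTICK_RATE_HZ in FreeRTOSConfig.h. The values below are illustrative only, not a recommendation (1 kHz is a common default; the higher rates are the ones the question asks about):

```c
/* FreeRTOSConfig.h -- illustrative values, not a recommendation */
#define configTICK_RATE_HZ    ( ( TickType_t ) 1000 )    /* 1 ms tick (common default) */
/* #define configTICK_RATE_HZ ( ( TickType_t ) 10000 )      100 us tick                */
/* #define configTICK_RATE_HZ ( ( TickType_t ) 100000 )     10 us tick: allowed, but
      the scheduler overhead grows with the rate            */
```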

@hs2 @aggarg Thank you. There are no major problems that we are currently facing when I use a TICK TIME of 1ms.

  1. I wanted to use a 500us non-blocking delay using vTaskDelay.
    Example: If the TICK TIME is 100us, then we can get a delay of 500us, or 5 ticks, using vTaskDelay(5).
  2. Is there any possible method to measure the context-switching time?

Kind Regards

First, vTaskDelay is a BLOCKING delay: the task that calls it is blocked, and the system will switch to another task. At the end of the delay the task is woken, and possibly switched to, if preemption is on and the task has a higher priority than the currently running task.

You can get a rough idea of the context switch time by setting up a program where one task sets a GPIO one way and then blocks, and the task that it switches to sets the GPIO the other way. If you list your processor and processor speed, someone may have a rough idea of the time. It will vary slightly with conditions; I think it is often on the order of hundreds of instruction cycles.

If you want a NON-BLOCKING delay, I would use a counter in the system running at high speed (many processors have a cycle counter running at processor speed) and watch it increase by the needed amount. If you need just an occasional BLOCKING delay of that sort of length, I would see if the processor has a counter/timer that you can trigger to interrupt after the needed delay, and then block until that ISR fires. If you set the tick rate high, you impose that overhead ALL the time. I tend to use a 10-100 Hz (10ms - 100ms) tick rate/period and this sort of timer trick when I actually need a short delay. Often, the thing that needs to happen after that delay can actually be done in the ISR, and I can avoid a lot of the high-speed context switches.

One thing to note is that as the tick frequency increases, the proportion of time spent in the tick interrupt handler increases. This isn’t to say that the tick interrupt takes longer, rather that the tasks being swapped in and out get less time/fewer cycles to execute between switches.

If you are set on using the tick for timing, I’d suggest using the largest tick period you can tolerate for your timing needs. You mentioned a 500us timing need - your application might work out okay with a 500us tick period; testing would tell you more. For a non-blocking wait, you’ll want to use xTaskGetTickCount(). I would generally avoid creating a NON-BLOCKING wait, though, as one of the major points of an RTOS is to schedule around tasks which cannot execute (due to priority, blocking, etc.).

If you’re open to not using the FreeRTOS tick for a non-blocking wait, richard-damon’s solution sounds like a winner.

@richard-damon Thank You. I will try this method to find the Context switching time. I am using Cortex R5F with 80MHz CPU Speed.

@kstribrn Thank You for the suggestion.

Not totally familiar with that processor, but I suspect you will find scheduler overhead on the order of small single-digit microseconds, so 100us ticks would cost you several percent of your CPU throughput just for the tick.
