Changing TICK_RATE_HZ and CPU_CLOCK_HZ

Hello all,

I have a couple of questions on tick rate and clock speed. I am fairly new to the FreeRTOS community, so please excuse my mistakes.

Info: I’ve been working on a somewhat custom port of FreeRTOS on a Raspberry Pi Zero board, thanks to some great folks on GitHub. I’ve started to understand some of the concepts and how the configuration works. I wanted to measure how fast my task runs and do some comparisons. At this point I tried some measurements using the xTaskGetTickCount function.

Questions:

  1. I want to use the recommended tick rate so as not to consume too much RTOS time. The default configuration was 1 kHz. At 1 kHz, my task, which does some ODE calculations, takes about 55 ticks. When I lower the configuration to 100 Hz, my task still seems to take 55 ticks. This confused me, so I tried adding vTaskDelay(200) in between. With the 200 ms delay, I get around 260 ticks. My question is: why does changing TICK_RATE_HZ have no effect on my board? Should I call some init function for this?

  2. I am somewhat confused about what the CPU_CLOCK_HZ value should be. In my configuration the value was 24 MHz, though my board has a 1 GHz single-core ARMv6 CPU. I thought this value should equal the clock speed given in the specs. Am I wrong to assume this?

  3. As I said, I’m doing some comparison work to get a rough idea of performance. I have a native Linux system with a roughly 2.25 GHz Intel processor to take a reference execution value. It is comparing apples to oranges, I know, but the difference was still huge. On the native system, where I force the code onto a single core, execution takes about 26 microseconds. On my board, according to the tick calculation, it takes around 55 milliseconds. I expected it to be at least in a similar ballpark (maybe in the 200-500 microsecond range; instead it is more than 1000 times slower). What do you think?

Hello @serhat! Thanks for reaching out.

  1. Regarding Question 1 - how are you getting the number of ticks for your count? xTaskGetTickCount returns the number of ticks since the scheduler was started. You could call it at the beginning and end of your function and subtract the start value from the end value - see the sketch at the end of this list.

  2. The CPU_CLOCK_HZ value should be equal to your CPU clock frequency. More on this can be found here.

  3. I’d read through this page. While I haven’t personally benchmarked the Linux simulator against a real device running the same set of tasks, it is reasonable that the results would differ this much - the Linux simulator doesn’t behave exactly like a real-time system.
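
For illustration, here is a minimal sketch of that measurement (vRunOdeStep is a placeholder for your calculation). Note that vTaskDelay takes ticks rather than milliseconds, and that portTICK_PERIOD_MS converts ticks back to wall time - so if configTICK_RATE_HZ really changed, the tick count for the same amount of work should change with it:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder for the work being measured. */
extern void vRunOdeStep( void );

void vMeasureTask( void *pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        TickType_t xStart = xTaskGetTickCount();

        vRunOdeStep();

        TickType_t xElapsedTicks = xTaskGetTickCount() - xStart;

        /* portTICK_PERIOD_MS is 1000 / configTICK_RATE_HZ, so this
           converts the tick count to milliseconds regardless of the
           configured tick rate. */
        TickType_t xElapsedMs = xElapsedTicks * portTICK_PERIOD_MS;

        ( void ) xElapsedMs; /* e.g. report over UART */

        /* pdMS_TO_TICKS() keeps the delay at one second whatever
           the tick rate is. */
        vTaskDelay( pdMS_TO_TICKS( 1000 ) );
    }
}
```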

Hello @kstribrn, thank you for your response :slight_smile: Let me give some more information.

  1. I should’ve mentioned this in the question, but you are right: I basically call it twice and check the difference. For both 1 kHz and 100 Hz tick frequencies, the difference was about 55 ticks. That is why I’m confused; I expected something like 5-6 ticks with the rate set to 100 Hz.

  2. Thanks for the reference link. I will set it to 1 GHz. One more question though: when is this value used? For example, when a timer is armed?

  3. Again, I should’ve given more information here. I have a basic mathematical algorithm. I run it natively on Debian Linux without any RT/preemption patches and check the execution time by a similar method: take the clock at the start and end, and check the difference. This comes to around 26 microseconds (which is reasonable, as I have several loops to execute in each iteration). Then I move the same algorithm code to my FreeRTOS Pi. With the tick count measurement, it comes to 55 milliseconds of execution. I am aware that one platform is an ARM and the other a high-end Intel CPU. Still, for such a simple computation, it seemed odd that it was more than 2000 times slower.
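
For reference, the Linux-side measurement looks roughly like this (simplified; run_algorithm is a stand-in for my actual code, and I am using a monotonic clock):

```c
#include <stdio.h>
#include <time.h>

/* Stand-in for the same ODE/math routine that runs on the Pi. */
extern void run_algorithm(void);

int main(void)
{
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    run_algorithm();
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Difference in microseconds. */
    long us = (end.tv_sec - start.tv_sec) * 1000000L
            + (end.tv_nsec - start.tv_nsec) / 1000L;
    printf("execution took %ld us\n", us);

    return 0;
}
```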

How configCPU_CLOCK_HZ is used depends on the architecture - occasionally it needs to be set to the frequency of the clock that feeds the timer used to generate the tick interrupt, rather than the CPU speed. As this is ARMv6, I’m assuming the timer is proprietary to the chip manufacturer - where did you get the code that configures the timer to generate the tick? Can you post it here?
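
To illustrate the relationship, this is the generic pattern many ports use to program the tick timer - a sketch only, not necessarily what your Pi port does:

```c
#include "FreeRTOS.h"

/* Illustrative only: many ports compute the tick timer's reload value
   as the number of timer-clock cycles per RTOS tick.  This is why
   configCPU_CLOCK_HZ must match the clock feeding that timer, which
   is not always the CPU core clock. */
static uint32_t prvTickTimerReloadValue( void )
{
    return ( configCPU_CLOCK_HZ / configTICK_RATE_HZ ) - 1UL;
}
```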

There are crude ways of checking approximate tick frequencies. For example, write a simple task that toggles an LED every second, then just use a stopwatch to count 60 toggles and see approximately how close to a minute it is - good enough to find gross errors.
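
A minimal sketch of such a task, assuming vToggleLed wraps whatever GPIO access your board provides:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Placeholder: toggle whatever GPIO pin drives the LED. */
extern void vToggleLed( void );

static void prvBlinkTask( void *pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        vToggleLed();

        /* If the tick is configured correctly this is one second; if
           the LED visibly runs fast or slow, the tick frequency is
           wrong by roughly that factor. */
        vTaskDelay( pdMS_TO_TICKS( 1000 ) );
    }
}
```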

@rtel Thanks for answering. I am unable to provide links because I am a new user, but you can find it on GitHub if you search for “RaspberryPi-FreeRTOS”. The first result is from jameswalmsley, but I use another repository by yrgohrm (who forked from james’s repo).

I have a task for LED blinking and it runs as you describe. I do not have major timing issues; for example, I have a UART task that sends a character every second, and I can observe that one character is received per second.

Increasing the CPU_CLOCK_HZ define had no observable effect, which is why I asked whether I need to arm the timer interrupt myself or whether the value takes effect by default.

Assuming I am looking at the correct port, as you described above: this function sets up the timer interrupt and does not use CPU_CLOCK_HZ. That is why you do not see any effect from changing CPU_CLOCK_HZ.

@aggarg Thank you for your answer. You are correct, this is the port I am using. Thanks for the pointers. Currently I do not call this setup function; I will include it and check again.
A hypothetical question: what would be the end result if I set the CPU_CLOCK_HZ value incorrectly? For example, would task execution slow down? Can this configuration change have any effect on task execution at all? (Ignoring the timer and interrupt side of things.)

You do not need to include this function - it is internal to the FreeRTOS port you are using and is therefore already being called.

That depends on what the variable is used for in the relevant FreeRTOS port. It does not appear to be used in the port you are using, so there should be no impact.

Ah, thank you. I had assumed it was some kind of “init” function that I needed to call manually so that the timer registers get configured and the timer itself is armed.

Understood. Thanks for the info.

Try profiling the execution to get a better evaluation of how each task performs in a real usage scenario. Take a look at this article: TLS Protocol Analysis Using IoTST—An IoT Benchmark Based on Scheduler Traces.