FreeRTOS+TCP with a tick period < 1 ms

If we configure the tick period to be less than 1 ms, portTICK_PERIOD_MS becomes zero, but FreeRTOS+TCP still uses it.

Would it not be better if FreeRTOS+TCP used pdMS_TO_TICKS() instead of portTICK_PERIOD_MS?

Yes, it is certainly better to use the macro pdMS_TO_TICKS(). Here it is without the usual casts:

#define pdMS_TO_TICKS( xTimeInMs ) \
        ( ( ( xTimeInMs ) * configTICK_RATE_HZ ) / 1000U )

The macro portTICK_PERIOD_MS is being used less and less.
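To illustrate why this matters, here is a small example. The tick rate used is only an assumption (10 kHz, i.e. a 100 µs tick):

/* Assume configTICK_RATE_HZ = 10000, i.e. a 100 us tick (assumption for illustration). */
/* portTICK_PERIOD_MS = 1000 / 10000 = 0 in integer arithmetic, so this divides by zero: */
vTaskDelay( 500 / portTICK_PERIOD_MS );   /* broken with a sub-millisecond tick */

/* pdMS_TO_TICKS( 500 ) = ( 500 * 10000 ) / 1000 = 5000 ticks, as intended: */
vTaskDelay( pdMS_TO_TICKS( 500 ) );       /* correct for any tick rate */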

But I would also like to say: think twice before you increase the tick rate. If you want a precise measurement of time, it is much better (and cheaper) to use a hardware timer.

And if something must happen at exactly a certain moment, you can also use an interrupt to wake up a task (using a task notification), and make sure that the task has a high enough priority so it can run immediately.
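As a rough sketch of that idea (the ISR name and the timer setup are placeholders; only the FreeRTOS calls are actual kernel APIs):

#include "FreeRTOS.h"
#include "task.h"

/* Handle of the task to wake, created elsewhere with xTaskCreate() at a
 * sufficiently high priority. */
static TaskHandle_t xWorkerTask = NULL;

/* Hypothetical hardware timer ISR that fires at the required moment. */
void vTimerIRQHandler( void )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Clear the timer's interrupt flag here (hardware specific, not shown). */

    /* Wake the worker task. */
    vTaskNotifyGiveFromISR( xWorkerTask, &xHigherPriorityTaskWoken );
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

static void prvWorkerTask( void *pvParameters )
{
    ( void ) pvParameters;

    for( ;; )
    {
        /* Block until the ISR sends a notification. */
        ulTaskNotifyTake( pdTRUE, portMAX_DELAY );

        /* Do the time-critical work here. */
    }
}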

It is clear that it would be possible to use a task notification to wake up a task after 100 µs. But in that case we would not have 100 µs time granularity from FreeRTOS itself, for example the possibility to sleep for 200 µs and so on.

I would propose that the three places where FreeRTOS+TCP uses portTICK_PERIOD_MS are replaced with pdMS_TO_TICKS(), like it is done in the FreeRTOS kernel.

Hi, I also want to measure execution time precisely within a task. Could you please elaborate on how a hardware timer would be used for this?

Hardware used: i.MX 8M Plus - Cortex-A53 (Linux) and Cortex-M7 (FreeRTOS)

Does this help - FreeRTOS Run Time Stats

I believe the Run Time Stats would be useful when multiple tasks are involved.
Would this also work for measuring the time within the task scenario mentioned below?

For ex:

  • receive data from the A-53 processor
  • measure the receive time
  • send data back to the A-53 processor
  • measure the transmission time

I have used xTaskGetTickCount() for this, but want to get a more precise time.

You can set a hardware timer to run at whatever frequency is best for the granularity of your measurements - then read the timer at each measurement point to see how much time elapsed between the two.
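A minimal sketch of that approach, assuming a free-running 32-bit counter read through a hypothetical ulReadHardwareTimer() function and an assumed 1 MHz timer clock:

#include <stdint.h>

/* Hypothetical helper that returns the current value of a free-running
 * 32-bit hardware counter. */
extern uint32_t ulReadHardwareTimer( void );

#define timerCLOCK_HZ    1000000UL    /* assumed 1 MHz timer clock */

uint32_t ulMeasureElapsedUs( void )
{
    uint32_t ulStart, ulEnd;

    ulStart = ulReadHardwareTimer();

    /* ... the code being measured ... */

    ulEnd = ulReadHardwareTimer();

    /* Unsigned subtraction still gives the right answer if the counter
     * wrapped around once between the two reads. */
    return ( ulEnd - ulStart ) / ( timerCLOCK_HZ / 1000000UL );
}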

xTaskGetTickCount just informs you about the number of kernel ticks. Usually, one kernel tick = 1ms.

If you want a smaller period than that, as @htibosch mentioned, you should use a hardware timer.
You can find the reference manual of the processor here. Note that you’ll need to sign in to download the manual.

Search for “General Purpose Timer (GPT)” section in the manual. That should tell you about all kinds of things required to run the timer - like frequency/reload value/auto-reload etc.

And once you have the timer running, you can use the difference in measurements (as @rtel pointed out above) to calculate the elapsed time.
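To make that concrete, below is a heavily hedged sketch of bringing up one GPT as a free-running counter and reading it. The base address, register offsets and bit positions are assumptions taken from my reading of the i.MX 8M GPT chapter; verify them against the reference manual, or use the GPT driver from the NXP SDK instead (clock gating for the GPT is not shown):

#include <stdint.h>

#define GPT1_BASE      0x302D0000UL    /* assumed GPT1 base address - check the memory map */
#define GPT_CR         ( *( volatile uint32_t * ) ( GPT1_BASE + 0x00UL ) )   /* control register */
#define GPT_PR         ( *( volatile uint32_t * ) ( GPT1_BASE + 0x04UL ) )   /* prescaler register */
#define GPT_CNT        ( *( volatile uint32_t * ) ( GPT1_BASE + 0x24UL ) )   /* counter register */

#define GPT_CR_EN      ( 1UL << 0 )    /* enable the timer */
#define GPT_CR_CLKSRC  ( 1UL << 6 )    /* clock source field = peripheral clock */
#define GPT_CR_FRR     ( 1UL << 9 )    /* free-run mode, do not restart on compare */

void vSetupHighResTimer( void )
{
    GPT_CR = 0UL;                                      /* disable while configuring */
    GPT_PR = 0UL;                                      /* prescaler: divide by 1 */
    GPT_CR = GPT_CR_CLKSRC | GPT_CR_FRR | GPT_CR_EN;   /* free-running, enabled */
}

uint32_t ulReadHighResTimer( void )
{
    return GPT_CNT;                                    /* current 32-bit count */
}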

Hope this helps,
Aniruddha

Yes, you can use the Run Time Stats function to get more precise timing. The precision depends on the speed of the timer.

As of the most recent kernel versions, you can also base this counter on a type wider than 32 bits, so you can span longer times accurately.
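In FreeRTOSConfig.h that could look roughly like this; the two timer functions are hypothetical application functions, and configRUN_TIME_COUNTER_TYPE is only available in recent kernel versions:

/* Run-time-stats settings, as a sketch.  vConfigureStatsTimer() and
 * ullReadStatsTimer() are hypothetical functions that set up and read a
 * fast hardware timer. */
#define configGENERATE_RUN_TIME_STATS              1
#define configRUN_TIME_COUNTER_TYPE                uint64_t   /* recent kernels only */
#define portCONFIGURE_TIMER_FOR_RUN_TIME_STATS()   vConfigureStatsTimer()
#define portGET_RUN_TIME_COUNTER_VALUE()           ullReadStatsTimer()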

Maybe these examples are of some help: Zynq and STM32F4x

The former example uses a Xilinx function XTime_GetTime(), which reads the 64-bit Global Timer Counter Register. Not sure if that register is available in an A53.

The second example uses a generic timer/counter, as suggested by @rtel in this post.
The TC will count from 0 to ulReloadCount, and trigger an interrupt after 10 seconds. The time is calculated by adding the slow and the fast count, see ullGetHighResolutionTime().
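The idea behind that calculation, as a hedged sketch (the names below only loosely mirror the example code):

#include <stdint.h>

extern volatile uint32_t ulOverflowCount;   /* incremented by the timer ISR, e.g. every 10 seconds */
extern uint32_t ulReloadCount;              /* number of fast ticks per timer period */
extern uint32_t ulReadTimerCount( void );   /* hypothetical: read the running fast count */

uint64_t ullGetHighResTime( void )
{
    uint32_t ulSlow1, ulSlow2, ulFast;

    /* Re-read the slow count in case an overflow happened between the reads. */
    do
    {
        ulSlow1 = ulOverflowCount;
        ulFast  = ulReadTimerCount();
        ulSlow2 = ulOverflowCount;
    } while( ulSlow1 != ulSlow2 );

    /* Total time = full periods already elapsed + the current fast count. */
    return ( ( uint64_t ) ulSlow1 * ulReloadCount ) + ulFast;
}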

Hello,

Thanks for the support!
I was able to measure the time precisely using the GPT timer. I will also try the other solutions provided.