heinbali01 wrote on Sunday, September 02, 2018:
A few additional remarks:
long start = xTaskGetTickCount();
The type long is signed. The result of xTaskGetTickCount() has the type TickType_t, which is always unsigned. In the Zynq port it is 32 bits, defined in portmacro.h:
typedef uint32_t TickType_t;
When you calculate a difference in time, you can use unsigned arithmetic, as shown below:
TickType_t xStart, xEnd, xDifference;

for( ;; )
{
    xStart = xTaskGetTickCount();
    vTaskDelay( pdMS_TO_TICKS( 1000UL ) );
    xEnd = xTaskGetTickCount();
    xDifference = xEnd - xStart;
    /* Cast because TickType_t is not necessarily the same type as 'unsigned long'. */
    printf( "Time diff: %lu ticks\n", ( unsigned long ) xDifference );
}
In the above case, one would expect to see: “Time diff: 1000 ticks”. Can you please verify that?
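To illustrate why the unsigned type matters, here is a small wrap-around sketch of mine ( not part of the original question ), assuming the FreeRTOS headers are included:

TickType_t xWrapStart      = 0xFFFFFFF0UL;           /* 16 ticks before the 32-bit counter wraps. */
TickType_t xWrapEnd        = 0x00000010UL;            /* 16 ticks after the wrap. */
TickType_t xWrapDifference = xWrapEnd - xWrapStart;   /* Modulo-2^32 arithmetic gives 0x20 = 32 ticks. */

Unsigned overflow is well defined in C, which is why TickType_t being unsigned keeps the subtraction correct even when the tick counter wraps between the two samples.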
long start = xTaskGetTickCount();
func_doing_something();
long stop = xTaskGetTickCount() - start;
The variables above are never read, so why would the compiler assign a value to them? With optimisation enabled, the compiler is free to drop those assignments entirely.
When you inspect local variables from within a debugger, it is best to switch off compiler optimisations ( -O0 ).
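If you do want the measured value to survive optimisation, one option ( just a sketch on my side, reusing func_doing_something() from your snippet ) is to actually consume the result, for instance by printing it:

TickType_t xBefore, xAfter;

xBefore = xTaskGetTickCount();
func_doing_something();
xAfter = xTaskGetTickCount();
/* Printing ( or storing to a volatile ) makes the result 'used', so the
compiler can not discard the measurement. */
printf( "func_doing_something() took %lu ticks\n", ( unsigned long ) ( xAfter - xBefore ) );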
The tick count is incremented from within an interrupt, and that happens independently from what your application is doing.
So unless you use an endless critical section ( which disables interrupts ), the tick count should always get updated.
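For clarity, a critical section is a construct like the one sketched here ( my illustration, not code from your project ); while it is active the tick interrupt can not run, so keep it short:

taskENTER_CRITICAL();
{
    /* Interrupts are masked here: xTaskGetTickCount() will not advance
    until taskEXIT_CRITICAL() is called. */
}
taskEXIT_CRITICAL();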
When I work with Xilinx/Zynq, I often use a function from the Xilinx library to measure time in micro-seconds:
#include <stdint.h>        /* uint64_t */
#include "xparameters.h"   /* XPAR_CPU_CORTEXA9_CORE_CLOCK_FREQ_HZ */
#include "xtime_l.h"       /* XTime, XTime_GetTime() */

/* The Cortex-A9 global timer used by XTime_GetTime() runs at half the CPU clock. */
#define COUNTS_PER_USECOND    ( XPAR_CPU_CORTEXA9_CORE_CLOCK_FREQ_HZ / ( 2 * 1000000u ) )

uint64_t ullGetHighResolutionTime( void )
{
    XTime tCur;

    XTime_GetTime( &tCur );
    tCur /= COUNTS_PER_USECOND;

    /* Return the time in micro-seconds. */
    return tCur;
}
My core frequency was 666 MHz.
The functions store the time as a 64-bit value, so it will not overflow quickly.
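As a usage sketch ( my own example, again borrowing func_doing_something() from your post ), the helper can be used to time a call in micro-seconds:

uint64_t ullStart, ullElapsed;

ullStart = ullGetHighResolutionTime();
func_doing_something();
ullElapsed = ullGetHighResolutionTime() - ullStart;
/* The cast assumes the duration fits in 32 bits ( about 71 minutes ). */
printf( "Duration: %lu us\n", ( unsigned long ) ullElapsed );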