Measuring Real-Time Performance

tns1 wrote on Thursday, September 25, 2008:

Are there any FreeRTOS projects set up specifically to measure real-time performance, or perhaps suggestions of how best to go about this type of testing?

I am looking for a way that I can reasonably claim that the overall system and components (OS) are time deterministic to some degree. This may mean that all scheduled tasks begin execution within x msec of their ideal start time, or it may mean that the relationship between loading and latency is characterized.
This type of proof could be data collected from tracing or scope measurements (or even simulation), but I’d also be interested in seeing claims about the OS components themselves, e.g. “all APIs execute in O(log n) or better”.

Since measured performance is application dependent, I am thinking of two categories of tests. One would focus on measuring the timeliness of the scheduler and other OS APIs, and might have a set of dummy tasks with adjustable loading to see how loading affects determinism (a rough sketch of such a task is below).
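
A rough sketch of what I have in mind; vTaskDelay() is the real FreeRTOS API, everything else here is made up for illustration:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Sketch of a dummy load task.  The parameter sets the number of
 * busy-loop iterations per period, so the CPU loading can be adjusted
 * at creation time.  All names are illustrative only. */
static void vDummyLoadTask( void *pvParameters )
{
    volatile unsigned long ulIterations = ( unsigned long ) pvParameters;
    volatile unsigned long ul;

    for( ;; )
    {
        /* Burn CPU for an adjustable amount of time... */
        for( ul = 0; ul < ulIterations; ul++ )
        {
            /* Busy loop - volatile stops the compiler removing it. */
        }

        /* ...then sleep until the next period. */
        vTaskDelay( pdMS_TO_TICKS( 10 ) );
    }
}
```

Creating several instances at different priorities and with different iteration counts would give the adjustable loading.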

The second type of test would focus on the application, and help identify problem areas.

thanks

rtel wrote on Thursday, September 25, 2008:

> Are there any FreeRTOS projects set up specifically to measure
> real-time performance, or perhaps suggestions of how best to
> go about this type of testing?

You might like to look at the LM3Sxxx demos to start with.  These have standard tasks running at various priorities (including a WEB server), with medium frequency interrupts that nest with each other (and the kernel) and fight to access a common resource (a queue), and finally a high frequency (20KHz) interrupt running at a high priority.  The LCD then displays the jitter measured in the high frequency interrupt.  The CPU is heavily loaded.
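
For anyone wanting to replicate that type of measurement, a minimal sketch follows.  ulReadFreeRunningTimer() is a hypothetical accessor for a free-running hardware counter, and EXPECTED_PERIOD_COUNTS is an assumed value that would come from your timer configuration:

```c
/* Sketch: measuring jitter of a periodic (e.g. 20KHz) interrupt.
 * ulReadFreeRunningTimer() is hypothetical - replace it with a read
 * of a real free-running counter register on your target. */
extern unsigned long ulReadFreeRunningTimer( void );

#define EXPECTED_PERIOD_COUNTS    5000UL /* counter ticks per period (assumed) */

static unsigned long ulLastCount = 0;
static unsigned long ulMaxJitter = 0;

void vHighFrequencyTimerISR( void )
{
    unsigned long ulNow = ulReadFreeRunningTimer();
    unsigned long ulDelta = ulNow - ulLastCount; /* unsigned maths handles wrap */
    unsigned long ulJitter;

    ulLastCount = ulNow;

    /* Jitter is the deviation of the measured period from the ideal. */
    ulJitter = ( ulDelta > EXPECTED_PERIOD_COUNTS ) ?
                   ( ulDelta - EXPECTED_PERIOD_COUNTS ) :
                   ( EXPECTED_PERIOD_COUNTS - ulDelta );

    if( ulJitter > ulMaxJitter )
    {
        ulMaxJitter = ulJitter; /* worst case observed so far */
    }
}
```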

Measuring real-time performance is very difficult unless you have a clear definition of exactly what you are interested in.  I always tell people to believe nothing that is written and to take the measurements themselves.  You only have to look at the claims and counter-claims about the quality of the GCC compiler compared to commercial compilers (most of which are complete nonsense when you actually look into the test environments), or the speed of one processor compared to another.

I’m getting a bit off topic and philosophical maybe, but…improving the ‘performance’ of FreeRTOS as measured by how quickly a semaphore can be given, or how quickly data can be passed to a queue, etc. would be very easy.  All I would have to do is remove the scheduler locking and responsiveness-tuned execution flow and just stick the entire function definition in one big critical section.  While this would make the functions super quick, I’m sure you would agree it would not make a better system, as responsiveness would nose-dive.  Engineering is always about finding the best compromise given the resources (CPU power, RAM, etc.) available.
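
To make the compromise concrete: taskENTER_CRITICAL()/taskEXIT_CRITICAL() and vTaskSuspendAll()/xTaskResumeAll() are the real FreeRTOS APIs, but the functions wrapping them here are just a sketch:

```c
#include "FreeRTOS.h"
#include "task.h"

/* Fast but unresponsive: interrupts are disabled for the whole
 * operation, so interrupt latency grows with the work performed. */
void vUpdateSharedDataFast( void )
{
    taskENTER_CRITICAL();
    {
        /* ...entire operation inside one big critical section... */
    }
    taskEXIT_CRITICAL();
}

/* Slower per call, but responsive: the scheduler is suspended so no
 * other task can run, yet interrupts remain enabled and are serviced
 * with minimal latency. */
void vUpdateSharedDataResponsive( void )
{
    vTaskSuspendAll();
    {
        /* ...operation that must not be preempted by another task... */
    }
    ( void ) xTaskResumeAll();
}
```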

> I am looking for a way that I can reasonably claim that the
> overall system and components (OS) are time deterministic to
> some degree. This may mean that all scheduled tasks begin
> execution within x msec of their ideal start time,

This is going to be very application dependent - FreeRTOS.org is not a time/space partitioned kernel like Green Hills Integrity, for example.  You can assign your task priorities with consideration of ideal start times, with the help of various analysis techniques, etc.  It is still going to depend on your application design more than on the RTOS.

> or it may
> mean that the relationship between loading and latency is
> characterized.
> This type of proof could be data collected from tracing or
> scope measurements (or even simulation), but I’d also be
> interested in seeing any claims about the OS components, “all
> APIs execute O(log n) or better” type of claims.

The trace macros are provided for taking timings, etc.
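
For example, timing hooks can be defined in FreeRTOSConfig.h.  traceTASK_SWITCHED_IN() and traceTASK_SWITCHED_OUT() are real trace macros; ulGetTimestamp() and vLogEvent() are hypothetical functions you would supply yourself:

```c
/* In FreeRTOSConfig.h - a minimal sketch.  The two functions below
 * are placeholders for whatever timestamp source and logging
 * mechanism your application provides. */
extern unsigned long ulGetTimestamp( void );
extern void vLogEvent( const char *pcEvent, unsigned long ulTime );

/* Called by the kernel just after a task is selected to run. */
#define traceTASK_SWITCHED_IN()     vLogEvent( "in", ulGetTimestamp() )

/* Called by the kernel just before a task stops running. */
#define traceTASK_SWITCHED_OUT()    vLogEvent( "out", ulGetTimestamp() )
```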

> Since measured performance is application dependent,

I should have read down this far first ;o)

> I am
> thinking two categories of tests. One would focus on
> measuring the timeliness of the scheduler and other OS APIs,
> and might have a set of dummy tasks with adjustable loading
> to see how loading affects determinism.
>
> The second type of test would focus on the application, and
> help identify problem areas.

Can I ask what you are requiring this data for?  I might be able to suggest something appropriate.

Regards.

tns1 wrote on Wednesday, October 01, 2008:

This is needed to justify the choice of RTOS/scheduler. I am required to present proof that the OS/app will behave deterministically, under some definition of determinism.

A loose definition seems to be that the code will not hang indefinitely. Although important, this isn’t a useful definition. In the strictest sense, determinism implies exact predictability in space/time across all operating conditions, which does not seem achievable in any complex design.

RTOS vendors claim determinism, but then don’t define it. Why not create a useful definition, independent of processor choice, that addresses both the OS and the app? Something like:

A) the execution complexity (order of growth) of each kernel API is listed in table G.
B) at dispatch time, the scheduler will always start the task with the highest priority.
C) the latency (or jitter) of task start times is bounded by (formula H?) provided the app is written following these guidelines:
i  tasks do not dynamically allocate or de-allocate memory
ii app is written following RMA restrictions (see the sketch after this list)
iii tasks avoid use of API X,Y,Z
D) Benchmark J shows latencies & jitter for a specific implementation.
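
To give (formula H?) some substance: under classic rate-monotonic analysis, the Liu & Layland result says a set of n independent periodic tasks is schedulable if total CPU utilisation stays at or below n(2^(1/n) - 1). A minimal sketch of that check (the task set shown is made up):

```c
#include <math.h>
#include <stdio.h>

/* Sketch: Liu & Layland schedulability test for rate-monotonic
 * scheduling.  n independent periodic tasks are schedulable if
 * U <= n * (2^(1/n) - 1), where U is the sum of execution_time /
 * period over all tasks.  Exceeding the bound is inconclusive, not
 * necessarily unschedulable.  The task set below is illustrative. */

int main( void )
{
    /* { execution time, period } pairs, in the same time unit. */
    double tasks[][ 2 ] = { { 1.0, 10.0 }, { 2.0, 20.0 }, { 3.0, 40.0 } };
    int n = sizeof( tasks ) / sizeof( tasks[ 0 ] );
    double utilisation = 0.0;
    double bound = n * ( pow( 2.0, 1.0 / ( double ) n ) - 1.0 );

    for( int i = 0; i < n; i++ )
    {
        utilisation += tasks[ i ][ 0 ] / tasks[ i ][ 1 ];
    }

    printf( "U = %.3f, bound = %.3f -> %s\n", utilisation, bound,
            ( utilisation <= bound ) ? "schedulable" : "inconclusive" );

    return 0;
}
```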