Add tracing functionality for all API calls

Hi,

I would like to propose some additions to the Kernel so that different trace tools can be supported out-of-the-box.
Current implementations usually only trace a few specific cases, which does not reveal much about how the RTOS is working.

When tracing a system one is usually interested in the API calls being made, as well as related information such as call parameters and runtimes.
There currently seems to be no easy way to implement such tracing instrumentation, other than perhaps some toolchain-dependent function wrapping.

So, to natively support this kind of system tracing, I would like to propose new trace macros that are placed at the beginning of each public API function, as well as before each of their return statements.
This way all parameters, results, and runtimes can be traced simply by defining the macros that are used.

What do you think? Would this be a feasible approach?

Following is an example of these changes, with the newly added “traceAPI_uxTaskPriorityGet()” and “traceRETURN_uxTaskPriorityGet()” macros.

    UBaseType_t uxTaskPriorityGet( const TaskHandle_t xTask )
    {
        TCB_t const * pxTCB;
        UBaseType_t uxReturn;

        traceAPI_uxTaskPriorityGet( xTask );

        taskENTER_CRITICAL();
        {
            /* If null is passed in here then it is the priority of the task
             * that called uxTaskPriorityGet() that is being queried. */
            pxTCB = prvGetTCBFromHandle( xTask );
            uxReturn = pxTCB->uxPriority;
        }
        taskEXIT_CRITICAL();

        traceRETURN_uxTaskPriorityGet( uxReturn );
        return uxReturn;
    }
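
For illustration, the macros could then be defined by the user, for example in FreeRTOSConfig.h. A minimal sketch, assuming a simple printf-based logger just to show the mechanism (a real backend would need to be much faster; macros left undefined would default to empty):

    /* In FreeRTOSConfig.h - hypothetical definitions for illustration only. */
    #include <stdio.h>

    #define traceAPI_uxTaskPriorityGet( xTask ) \
        printf( "uxTaskPriorityGet( %p )\r\n", ( void * ) ( xTask ) )

    #define traceRETURN_uxTaskPriorityGet( uxReturn ) \
        printf( "uxTaskPriorityGet returns %u\r\n", ( unsigned ) ( uxReturn ) )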

The current trace macros are placed to trace task and scheduler behaviour - why a task is or isn’t running, etc. So I would see the function call tracing in your proposal as serving a different purpose, or at least providing a different view.

Anything that helps the developer has merit, so on the face of it I like the idea. Some thoughts on the practicality:

  1. We need to consider the balance of the benefits brought vs. the additional code size and clutter (reduced readability).

  2. Function entry and exit tracing will generate a lot of data. It may only be feasible with tool support. Trying to log strings, for example, won’t be possible because of the time taken to print and, probably, the additional code size (lots of const strings).

  3. There probably still needs to be some kind of execution pattern tracing too, so you can see where task switches occur; otherwise, task switches in the middle of a function will be hard to decipher in a log. This implies tool support too.

If there are no other opinions on this, I’m inclined to go ahead with the proposal.

I feel that this change would be a little heavy-handed just for tracing capabilities.

I feel that if a developer is looking for this level of granularity, they should probably just be running inside a debugger. Additionally, this would quickly generate an amount of information that almost becomes meaningless. If we then account for the possibility of being pre-empted after calling the proposed traceAPI_<FUNCTION_NAME> macro, the trace could become confusing to follow. I worry this could quickly become just one more thing to debug when trying to use it to actually generate traces.

Lastly, I’m worried about the maintenance overhead this approach would add to the FreeRTOS-Kernel. Any time a function is changed or modified, would you then need to make sure the corresponding trace macro gets updated? I feel that the same functionality could be obtained by just creating a generic function/macro, as sketched below: trace_API( char * functionName, TaskHandle_t xTask );
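
Something along these lines (a rough sketch of that generic variant; the names are illustrative, not an existing API):

    #include "FreeRTOS.h"
    #include "task.h"

    /* One generic entry hook instead of one macro per API function;
     * implemented once by the application. */
    void vTraceAPI( const char * pcFunctionName,
                    TaskHandle_t xTask );

    /* Placed at the top of an instrumented API function;
     * __func__ supplies the function name as a const string. */
    #define traceAPI( xTask )    vTraceAPI( __func__, ( xTask ) )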

Would be interesting to get Percepio’s input on this possible feature.

I agree with @skptak that it would cause huge changes to the source code. And an exception would probably have to be made for ISRs and critical sections, which makes it less “complete”.

I am using a module, eventBuffer.c, that can trace in any part of the code, including interrupts and critical sections.

It needs an interrupt-safe version of snprintf() and a high-precision clock, possibly a built-in timer/counter.

This is how logging is composed:

   eventLogAdd( "ISR %04X %04X", ulOldStatus, ulStatus );
   eventLogAdd( "Wait TC %04X", rc );

The resulting strings will be written to a circular buffer of e.g. 100 strings. They can be printed from the application, or inspected with a debugger.

This is how the logging may look:

    Nr:        ms   events
       0:     0.000 Taken 0001 0003
       1:     0.002 break1
       2:     0.025 Read 1 sectors
       3:     0.006 Wait TC 8003
       4:     0.003 ISR 0000 0001
       5:    70.706 ISR 0001 0002
       6:     0.003 ISR Sem Give
       7:     0.004 Taken 0001 0003
       8:     0.002 break1
       9:    90.907 Read 1 sectors
      10:     0.007 Wait TC 8003
      11:     0.003 ISR 0000 0001
      12:    70.706 ISR 0001 0002
      13:     0.004 ISR Sem Give
      14:     0.003 Taken 0001 0003
      15:     0.003 break1
      16:     0.026 Read 1 sectors

It has proven very useful when solving race conditions, as in my PR #33.

I also often used it while optimising FreeRTOS+TCP drivers.

And yes, I temporarily had to litter existing libraries with calls to eventLogAdd().
When USE_LOG_EVENT equals 0, nothing is logged.
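
For reference, here is a much-simplified sketch of what such a module might look like. This is my own reconstruction, not the actual eventBuffer.c; ullGetHighResTime() stands in for whatever high-precision clock is available:

    #include <stdarg.h>
    #include <stdint.h>
    #include <stdio.h>
    #include "FreeRTOS.h"

    extern uint64_t ullGetHighResTime( void );  /* Hypothetical clock source. */

    #if ( USE_LOG_EVENT != 0 )

    #define EVENT_COUNT     100u    /* Circular buffer of 100 strings. */
    #define EVENT_LENGTH    40u

    typedef struct
    {
        uint64_t ullTimestamp;               /* High-precision clock value. */
        char cMessage[ EVENT_LENGTH ];
    } Event_t;

    static Event_t xEvents[ EVENT_COUNT ];
    static volatile size_t uxNextEvent = 0;

    /* Callable from tasks, ISRs and critical sections: only the slot
     * reservation happens with interrupts masked, the formatting is done
     * outside. vsnprintf() must be interrupt-safe on the platform used. */
    void eventLogAdd( const char * pcFormat, ... )
    {
        UBaseType_t uxSavedInterruptStatus;
        Event_t * pxEvent;
        va_list xArgs;

        uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
        {
            pxEvent = &( xEvents[ uxNextEvent ] );
            uxNextEvent = ( uxNextEvent + 1u ) % EVENT_COUNT;
        }
        portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

        pxEvent->ullTimestamp = ullGetHighResTime();

        va_start( xArgs, pcFormat );
        vsnprintf( pxEvent->cMessage, EVENT_LENGTH, pcFormat, xArgs );
        va_end( xArgs );
    }

    #else /* USE_LOG_EVENT == 0: logging compiles away completely. */
        #define eventLogAdd( ... )    do {} while( 0 )
    #endif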

Yes, the purpose is slightly different to the existing trace macros.

Recording API function calls helps more with profiling and tracing the user application than with tracing the kernel.
Recording the function parameters and return value is optional, but can provide more insight into the system.
In combination with the existing instrumentation it is also possible to see what happens on an API call, such as a context switch when a semaphore is taken.

Certainly, for meaningful analysis, and to keep the execution and size overhead low, tool support would be recommended.
The macros could be set to use the eventBuffer module, though this might cause significant overhead, at least when recording everything.
SEGGER SystemView, for example, can record efficiently without logging whole strings, and can analyse the events in the context of their tasks. I think Tracealyzer could make use of this functionality, too.

The nice thing is that users who want to profile or trace their firmware do not have to modify the kernel code; they can do it entirely in the config, as sketched below.
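
A rough sketch of what a SystemView mapping could look like, assuming the SEGGER_SYSVIEW_RecordU32()/SEGGER_SYSVIEW_RecordEndCallU32() recording functions; apiID_uxTaskPriorityGet is a hypothetical event ID (real setups register IDs through a SystemView module description):

    /* In FreeRTOSConfig.h - illustrative only. */
    #include "SEGGER_SYSVIEW.h"

    #define apiID_uxTaskPriorityGet    ( 32u )    /* Hypothetical event ID. */

    /* The ( U32 ) casts assume a 32-bit target. */
    #define traceAPI_uxTaskPriorityGet( xTask ) \
        SEGGER_SYSVIEW_RecordU32( apiID_uxTaskPriorityGet, ( U32 ) ( xTask ) )

    #define traceRETURN_uxTaskPriorityGet( uxReturn ) \
        SEGGER_SYSVIEW_RecordEndCallU32( apiID_uxTaskPriorityGet, ( U32 ) ( uxReturn ) )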

I am not sure how stable the FreeRTOS API is.
When a function changes there is probably more to do, but yes, the macro would most probably need to be updated too.
That is also why I would instrument the public API only; tracing the kernel internals is out of scope.

One problem with using FreeRTOS objects in such a trace is that you will hit infinite recursion.

Everything called to do a trace must not itself try to trace things.

And then there are the possible race conditions that prevent you from using flags to break the recursion.
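
To illustrate the hazard, a contrived sketch: vTraceAPI(), TraceRecord_t and xTraceQueue are hypothetical names, and the queue is just an example backend:

    #include "FreeRTOS.h"
    #include "queue.h"

    typedef struct { const char * pcName; } TraceRecord_t;
    static QueueHandle_t xTraceQueue;    /* Created elsewhere. */

    /* Hypothetical hook backend that logs through a FreeRTOS queue. */
    void vTraceAPI( const char * pcFunctionName )
    {
        TraceRecord_t xRecord = { pcFunctionName };

        /* If xQueueSend() is itself instrumented, its entry hook calls
         * vTraceAPI() again, which calls xQueueSend() again - the recursion
         * never terminates. The trace backend must only use primitives
         * that are themselves untraced. */
        xQueueSend( xTraceQueue, &xRecord, 0 );
    }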

As there seem to be no further objections, I went ahead and created a new pull request.
Further details can be discussed there.
