Changing core clock frequency and systick on LPC4370

Hi,

I’m using FreeRTOS on an LPC4370 and, in order to optimize power consumption, I’m trying to change the core clock frequency depending on which section of code the core is running.

However, the core’s SysTick timer is used to generate the OS tick, and it runs from the same clock source as the core. To keep the FreeRTOS tick rate constant, I therefore also update the SysTick configuration registers whenever the core clock frequency changes. So far, though, I’ve been having trouble generating FreeRTOS ticks at the correct rate.

The procedure I’m following is (a rough code sketch follows the list):

  1. Stop the SysTick and disable its interrupt by writing to the control register

  2. Change the core clock frequency by calling NXP's setupcoreclock()

  3. Call NXP's SystemCoreClockUpdate()

  4. Update the SysTick reload value (LOAD) register

  5. Clear the SysTick current value (VAL) register by writing to it

  6. Restart the SysTick and re-enable its interrupt by writing to the control register

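Roughly, the switching code looks like the sketch below. It assumes the standard CMSIS definitions (SysTick, SystemCoreClock, SystemCoreClockUpdate()); SetupCoreClock() is just a placeholder name for NXP's clock-setup routine:

```c
#include "chip.h"       /* LPCOpen device header (pulls in the CMSIS core definitions) */
#include "FreeRTOS.h"
#include "task.h"

extern void SetupCoreClock(uint32_t targetHz);  /* placeholder for NXP's clock-setup routine */

void ChangeCoreClock(uint32_t targetHz)
{
    taskENTER_CRITICAL();                       /* no tick processing while switching */

    /* 1. Stop SysTick and disable its interrupt */
    SysTick->CTRL = 0;

    /* 2. Change the core clock frequency */
    SetupCoreClock(targetHz);

    /* 3. Refresh the CMSIS SystemCoreClock variable */
    SystemCoreClockUpdate();

    /* 4. New reload value for the same FreeRTOS tick rate */
    SysTick->LOAD = (SystemCoreClock / configTICK_RATE_HZ) - 1UL;

    /* 5. Clear the current count so the first new period is full length */
    SysTick->VAL = 0;

    /* 6. Restart SysTick: core clock source, interrupt, counter enable */
    SysTick->CTRL = SysTick_CTRL_CLKSOURCE_Msk |
                    SysTick_CTRL_TICKINT_Msk   |
                    SysTick_CTRL_ENABLE_Msk;

    taskEXIT_CRITICAL();
}
```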
The clock settings for the peripherals remain unchanged as they are fed from other base clocks, unrelated to the core clock.

So far I’ve found that everything seems to work when the core clock changes are made between frequencies in the lower ranges, e.g. from 12 MHz to 90 MHz. Calling xTaskGetTickCount() confirms that the OS tick count is incrementing at the correct rate.

However, if I move into the higher frequency ranges, e.g. changing the core clock from 12 MHz to 204 MHz, xTaskGetTickCount() shows the tick count increasing ~30x faster than expected. This seems to indicate that the SysTick isn’t generating interrupts at the correct frequency.
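As a cross-check that doesn’t rely on the kernel’s own idea of time, I can toggle a pin once per internal second and measure the real period on a scope. A minimal sketch, where ToggleTestPin() is a hypothetical board-specific GPIO helper:

```c
#include "FreeRTOS.h"
#include "task.h"

extern void ToggleTestPin(void);   /* hypothetical GPIO toggle helper */

/* Toggles a pin every 1000 ms of *internal* time; measuring the real
 * toggle period on a scope shows how far off the tick rate is. */
static void vTickCheckTask(void *pvParameters)
{
    TickType_t xLastWake = xTaskGetTickCount();
    (void) pvParameters;

    for (;;)
    {
        vTaskDelayUntil(&xLastWake, pdMS_TO_TICKS(1000));
        ToggleTestPin();
    }
}
```

If the tick really is running ~30x too fast, the pin should toggle roughly every 33 ms of real time instead of once per second.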

While trying to debug this problem I’ve done the following tests:

* Changing the SysTick configuration without changing the core clock. The internal timekeeping changes in proportion to the value in the SysTick LOAD register, as expected. That is, setting the SysTick to count double the correct number of clock ticks halves the SysTick interrupt frequency, and the microcontroller counts one internal second for every two real seconds.
* Changing the core clock frequency without changing the SysTick configuration registers. Unexpectedly, the internal timekeeping seems to run 30x faster than before. Since the core frequency changed from 12 MHz to 204 MHz, I expected the internal timekeeping to be 17x (204/12) faster than real time.
* Running the core at 204 MHz from chip initialization produces the expected behavior, and internal time is kept correctly.

Note that the 30x-faster internal timekeeping in the second test is the same behavior I originally saw when both the clock frequency and the SysTick registers were changed, which is not what I expected. From the first test I know that updating the SysTick configuration registers without changing the clock frequency produces the expected behavior, so it is unclear why changing the clock frequency affects this.

Furthermore, I’ve found the information in the user manual confusing. While looking for details on changing the core clock, I found subsection “13.2.1 Configuring the BASE_M4_CLK for high operating frequencies” in chapter “13 LPC43xx/LPC43Sxx Clock Generation Unit (CGU)” and the procedures it describes. Since every procedure listed explicitly includes the step “enable the crystal oscillator”, it seems to be implied that running the core above 110 MHz requires an external crystal oscillator. However, from the rest of chapter 13 it appears there should be no problem running the core at 204 MHz with the internal RC oscillator as clock source, since that falls within the specified capabilities of the CGU and PLL1. This is consistent with the third test above, which shows that running the clock at 204 MHz from the start works as expected and internal time is kept correctly. Additionally, the datasheet also seems to imply that an external high-frequency oscillator isn’t needed to operate at the maximum frequency: "Three PLLs allow CPU operation up to the maximum CPU rate without the need for a high-frequency crystal."

I’m out of ideas on how to solve this problem, and any help or hints would be greatly appreciated.

If I’m reading this correctly, setting the configuration for the faster clock speed works if it is the first speed you set, but not if you set a lower speed first and then try to raise the speed. If that is the case then it sounds like a quirk of the NXP hardware and you might be best asking NXP for support - for example, the PLLs may need a settling time. If you do that, try to get some data on the results of different setting sequences first - is it possible to output the clock on a pin and measure it with a scope?

There is also a small possibility that calculating a change in clock speed results in an integer overflow somewhere in the calculation that doesn’t occur when the clock is just set initially - I doubt that is the case, but it is worth stepping through the code to see.
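Purely as an illustration of the kind of thing to look for (this is not NXP's code), an intermediate 32-bit product in a frequency calculation can silently wrap once the numbers get large, while a 64-bit intermediate does not:

```c
#include <stdint.h>

/* Illustrative only: clk_hz * mul can exceed UINT32_MAX and wrap silently. */
static uint32_t scale_clock_bad(uint32_t clk_hz, uint32_t mul, uint32_t div)
{
    return (clk_hz * mul) / div;
}

/* Widening the intermediate to 64 bits keeps the result correct. */
static uint32_t scale_clock_ok(uint32_t clk_hz, uint32_t mul, uint32_t div)
{
    return (uint32_t) (((uint64_t) clk_hz * mul) / div);
}
```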

Another thing to look at: just because the default port uses the core's SysTick for the tick doesn't mean you can't change it. If your part has a free timer that you can program to run at a fixed rate even as the core changes speed, then changing the system tick to use that timer gets around all these problems.
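As a very rough, untested sketch of that approach - it assumes LPCOpen's 18xx/43xx timer API, that your port version lets you override vPortSetupTimerInterrupt() (it is weak, or guarded by configOVERRIDE_DEFAULT_TICK_CONFIGURATION, in recent Cortex-M ports), and, critically, that the branch clock feeding the timer you pick really does stay fixed when the core clock changes (that needs checking on the LPC4370):

```c
#include "chip.h"        /* LPCOpen device header */
#include "FreeRTOS.h"
#include "task.h"

extern void xPortSysTickHandler(void);   /* the Cortex-M port's tick handler */

/* Override the port's default SysTick-based tick setup. */
void vPortSetupTimerInterrupt(void)
{
    /* Rate of the clock branch feeding TIMER0 - verify that this branch is
     * not derived from the core clock in your configuration. */
    uint32_t timerHz = Chip_Clock_GetRate(CLK_MX_TIMER0);

    Chip_TIMER_Init(LPC_TIMER0);
    Chip_TIMER_Reset(LPC_TIMER0);
    Chip_TIMER_MatchEnableInt(LPC_TIMER0, 1);
    Chip_TIMER_SetMatch(LPC_TIMER0, 1, (timerHz / configTICK_RATE_HZ) - 1UL);
    Chip_TIMER_ResetOnMatchEnable(LPC_TIMER0, 1);
    Chip_TIMER_Enable(LPC_TIMER0);

    /* The tick interrupt must run at the lowest priority; this macro comes
     * from the common demo FreeRTOSConfig.h files - substitute your own. */
    NVIC_SetPriority(TIMER0_IRQn, configLIBRARY_LOWEST_INTERRUPT_PRIORITY);
    NVIC_EnableIRQ(TIMER0_IRQn);
}

/* The timer ISR drives the FreeRTOS tick instead of SysTick_Handler(). */
void TIMER0_IRQHandler(void)
{
    if (Chip_TIMER_MatchPending(LPC_TIMER0, 1))
    {
        Chip_TIMER_ClearMatch(LPC_TIMER0, 1);
        xPortSysTickHandler();
    }
}
```

The SysTick is then left free (or available for something else) and the tick rate stays constant regardless of what the core clock is doing.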