Highest visible tick rate

anonymous wrote on Wednesday, February 09, 2011:

Hey people, I want to know what the highest visible tick rate in FreeRTOS is - I mean, the highest rate that gives me the best tick precision - and I want to store it in a variable, for example an unsigned long.

What do you suggest?

Thanks all

richard_damon wrote on Wednesday, February 09, 2011:

The first question that comes to mind is: why do you need a high tick rate? If you just need to record precisely when things happened, it would be much better to have a second timer running and record its value when the events happen. What normally determines the tick rate you want is the duration and precision needed for queue timeouts, etc., which normally is not that fast.

frankliege wrote on Thursday, January 19, 2012:


There are several reasons, or maybe people like me who haven’t used an RTOS much before just don’t understand. Here is why I need a high tick rate: I am using a Stellaris 9B95 running at 80 MHz, with FreeRTOS + LwIP for a web server, TCP-to-serial, DHCP client, ARP, and Telnet. I also have a high priority task to manage communication on a high speed serial network that runs at a 5 Mbps data rate, polling multiple nodes one by one. The task needs to run at a rate of at least 1000 Hz.

So I keep a 1000 Hz tick rate; the high priority task runs fine, but LwIP and the other web related tasks don’t run at all. My task takes 400 µs every time, so out of the 1 ms between ticks I have 60% of the CPU time free, and I believe that can carry the load of the TCP tasks.

I need at least a 2000 Hz tick rate to run the other tasks. With a 2000 Hz tick rate, every other tick would run the high priority task and the remaining ticks could run the TCP tasks. But that didn’t work either. So perhaps you can throw some light on this - maybe restructuring the priorities or something.

richard_damon wrote on Thursday, January 19, 2012:

What do you need a high tick rate for? High data rates do NOT need high tick rates. You should only need a high tick rate if you have activities that happen at a high rate BASED STRICTLY ON A TIMER, and not on a “data ready” interrupt from a device, or if you need to precisely control timeouts.

You say that your high priority task is “polling”. If you are truly doing that, it is killing your system. One of the biggest purposes of an RTOS is to remove polling, converting activities to interrupt-based, run-on-demand processing.

If your polling at a 1 kHz rate - sending out messages to a bunch of devices and waiting for replies, with all the I/O interrupt driven - is saturating your processor, then you need a faster processor or a more efficient I/O system.

ALL devices should be running interrupt based, with the interrupt handling a minimal data gathering step and passing the data on to a task to process. Then you will probably find that you can use a 10 ms or maybe even a 100 ms tick rate (those are about right for the timeouts needed by those protocols).
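A minimal sketch of that interrupt-to-task pattern, assuming a UART receive interrupt. Note this is not from the original posts: vUartISR, vProtocolTask, xRxQueue and uart_read_byte() are illustrative names, and the ISR wiring is port specific.

```c
/* Sketch only - requires the FreeRTOS kernel and a port-specific UART driver. */
#include "FreeRTOS.h"
#include "queue.h"
#include "task.h"

static xQueueHandle xRxQueue;   /* created elsewhere with xQueueCreate() */

/* ISR: minimal data gathering - grab the byte, hand it to a task. */
void vUartISR( void )
{
    portBASE_TYPE xHigherPriorityTaskWoken = pdFALSE;
    unsigned char ucByte = uart_read_byte();   /* hypothetical driver call */

    xQueueSendFromISR( xRxQueue, &ucByte, &xHigherPriorityTaskWoken );

    /* Request a context switch if a higher priority task was unblocked. */
    portEND_SWITCHING_ISR( xHigherPriorityTaskWoken );
}

/* Task: blocks on the queue - it consumes no CPU until data arrives,
   and it wakes almost immediately when the ISR posts, regardless of
   whether the tick rate is 10 Hz or 1000 Hz. */
void vProtocolTask( void *pvParameters )
{
    unsigned char ucByte;
    for( ;; )
    {
        if( xQueueReceive( xRxQueue, &ucByte, portMAX_DELAY ) == pdTRUE )
        {
            /* process ucByte */
        }
    }
}
```

The key design point is that the task's wake-up latency comes from the interrupt and the context switch, not from the tick, which is why raising the tick rate does not help here.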

jon_newcomb wrote on Friday, January 20, 2012:

It’s a common mistake to think that increasing the tick rate will cause a task to ‘wake up’ more often to receive incoming data from a queue.
If a task is blocked waiting on a queue, and that queue has data written to it by, for example, the Ethernet controller ISR, then the task will start running almost immediately. This is true whether your tick rate is 1000 Hz, 100 Hz or 10 Hz (for guidance, I have never used more than 50 Hz).
But a golden rule must be followed - avoid any task polling in a tight loop waiting for something to happen. This eats processor time and can starve other tasks. Block instead.
Anyway… I digress. I think the initial post requested a more ‘fine grain’ reading of the tick count, not ‘increasing the tick rate’… (at least I hope they are not suggesting this)

This is what I have used to increase the granularity - an extra function added to task.c.
It combines xTaskGetTimerCount() with the ‘timer/counter’ value that is used to increment the tick counter.
It is only for the SAM7 processors, as reading the ‘timer counter’ is hardware specific (in my case, the PITC timer counter), but it’s a starting point.
The complexity is due to the danger of interrupts / task switches corrupting the value. There are two versions: use one from within an ISR and the other from within a task.

/**
 * \brief Access the fine-grain timer.
 * \note This *must* be called with interrupts off so that no interrupt
 * occurs between calling xTaskGetTimerCount() and reading PITC_PIIR.
 */
uint32_t nowISR( void )
{
   unsigned int t;
   unsigned int pitc_piir;

   // Get the total PIT counts seen by the tick interrupt
   // and the PIT 'fine' value
   t = xTaskGetTimerCount();
   pitc_piir = AT91C_BASE_PITC->PITC_PIIR;

   // If the timer overflowed then add the offset
   if( pitc_piir & AT91C_PITC_PICNT )
   {
      t += portPIT_COUNTER_VALUE;
   }

   // Add in the current PIT counter value
   t += pitc_piir & AT91C_PITC_CPIV;

   return t;
}

/**
 * \brief Access the fine-grain timer.
 * \note Never call from within an ISR - this could cause jumps of
 * a number of ms (defined by the OS tick).
 */
uint32_t now( void )
{
   portTickType timerCount0, timerCount1;
   portTickType timerCount2;

   timerCount0 = AT91C_BASE_PITC->PITC_PIIR & AT91C_PITC_CPIV;
   timerCount1 = xTaskGetTimerCount();
   timerCount2 = AT91C_BASE_PITC->PITC_PIIR & AT91C_PITC_CPIV;

   if( timerCount2 < timerCount0 )
   {
      timerCount1 = xTaskGetTimerCount();   // there has been an interrupt
   }

   return timerCount1 + timerCount2;
}
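For what it’s worth, the overflow compensation used in nowISR() can be checked in isolation. Here is a self-contained sketch of that arithmetic - the mask positions and the 3000-count reload value below are made up for illustration, not the real AT91 register layout:

```c
#include <stdint.h>

/* Illustrative stand-ins for the AT91 definitions: CPIV (current PIT
   value) in the low bits of PITC_PIIR, PICNT (overflow count) above it,
   and a hypothetical reload value of 3000 PIT counts per OS tick. */
#define CPIV_MASK          0x000FFFFFu
#define PICNT_MASK         0xFFF00000u
#define PIT_COUNTER_VALUE  3000u

/* Combine the coarse count (the tick count expressed in PIT counts)
   with one snapshot of the PIIR register, as nowISR() does. */
uint32_t fine_time( uint32_t coarse, uint32_t piir )
{
    /* If the timer overflowed since the last serviced tick,
       add one full reload period. */
    if( piir & PICNT_MASK )
    {
        coarse += PIT_COUNTER_VALUE;
    }
    /* Add in the current fine counter value. */
    return coarse + ( piir & CPIV_MASK );
}
```

For example, fine_time( 6000, 1234 ) returns 7234, while with the overflow field set, fine_time( 6000, (1u << 20) | 10 ) returns 9010 because one 3000-count reload period is added.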

frankliege wrote on Wednesday, February 01, 2012:

The polling of status and passing it to the other nodes is timer based. Every 1 ms, a timer invokes the task, which sends a packet out, checks for any received responses, parses them, updates the child node status structures, and also updates the timeout counters. This is strictly timer interrupt based. The UART interrupt is used only to move the FIFO data to a ring buffer (I can remove that using ping-pong DMA).

The entire process takes around 450 µs, leaving about 550 µs of idle time for the CPU.

This task has the highest priority in the system. With a 1000 Hz tick rate, it runs every tick. When processing is over (around 450 µs), the task YIELDs back to the scheduler. At this point it does not yield to the other tasks. How can I force the scheduler to start a lower priority task at this point?

richard_damon wrote on Wednesday, February 01, 2012:

YIELD means: give up the processor and let any other task of the SAME OR HIGHER priority take time. Lower priority tasks will not run, as this task is still marked as “READY”, and the scheduler will always run the highest priority ready task (with equal priorities selected via round robin).

You want the packet task to DELAY, to wait for the next timer tick; then it is “NOT READY” and the other tasks can run.

It sounds from your description that you are sending out a new packet every 1ms, on a timer basis, and if this is so, a 1ms timer tick is probably what you need.

Having one task taking 45% of your processor would make me a bit nervous, as that is very high, unless it really is the only big consumer of CPU time. I generally try to leave ample headroom of CPU time for the eventual feature upgrades that normally come.

frankliege wrote on Wednesday, February 01, 2012:

So what’s the method to run the lower priority tasks in between ticks, after dismissing the current higher priority task and making it NOT READY?

richard_damon wrote on Thursday, February 02, 2012:

As I said, you need to make the task NOT READY by using the delay function, whose full name is vTaskDelay, in your case it would be a simple vTaskDelay(1); statement.
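As a sketch of what that looks like, the packet task’s loop would then be something like the following. This is illustrative, not from the original posts: vPacketTask is a made-up name, and vTaskDelayUntil is shown as an alternative to vTaskDelay(1) because it keeps the 1 ms period from drifting when the work takes a variable amount of time.

```c
/* Sketch only - requires the FreeRTOS kernel. */
#include "FreeRTOS.h"
#include "task.h"

void vPacketTask( void *pvParameters )
{
    portTickType xLastWakeTime = xTaskGetTickCount();

    for( ;; )
    {
        /* ~450 µs of send/receive/parse work goes here. */

        /* Block until the next 1 ms tick.  While this task is NOT READY,
           the scheduler runs the lower priority TCP/IP tasks.
           vTaskDelay( 1 ) would also work, but can drift relative to
           the tick the task first started on. */
        vTaskDelayUntil( &xLastWakeTime, 1 );
    }
}
```

With a 1000 Hz tick, the delay of one tick gives exactly the 1 ms cadence the polling protocol needs, and the remaining ~550 µs of each period becomes available to the web and TCP tasks.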

rtel wrote on Thursday, February 02, 2012:

So what’s the method to run lower priority task in between ticks after dismissing current higher priority task and making it NOT READY

You don’t have to do anything - that is what the kernel does! You might want to read the FreeRTOS tutorial book, or a similar text on multitasking.