Sharing data between tasks

savindra wrote on Monday, May 30, 2016:

I want to share global data between two tasks which have different priority levels.

Let me explain what i am doing:

  1. I have two tasks: TaskA (receiving data from a serial module at a 10 ms rate) and TaskB (sending data to Ethernet)
  2. TaskA has a higher priority than TaskB.

Below are my tasks:

void TaskA( void *pvParameters )
{
    ( void ) pvParameters;
    portTickType xNextWakeTime;

    /// Initialise xNextWakeTime - this only needs to be done once
    xNextWakeTime = xTaskGetTickCount();

    ///Init serial module

    for( ;; )
    {
        /// Place this task in the Blocked state until it is time to run again.
        ///The block time is specified in ticks; the constant used converts ms to ticks
        vTaskDelayUntil( &xNextWakeTime, configTASKA_FREQUENCY_10MS );

        ///Receive data from the serial module and write it into the buffer
        gusFastOutputParameter1 = ( uint16_t )( g_pucRxBuffer[3] << 8 );
        gusFastOutputParameter1 |= ( uint16_t )( g_pucRxBuffer[4] );
    }
}


void TaskB( void *pvParameters )
{
    ( void ) pvParameters;

    ///Init Ethernet module

    for( ;; )
    {
        ///Read the shared data and copy it into the Ethernet register buffer
        *pucRegBuffer++ = ( unsigned char )( gusFastOutputParameter1 >> 8 );
        *pucRegBuffer++ = ( unsigned char )( gusFastOutputParameter1 & 0xFF );
    }
}


Problem: I do not want TaskA to write to gusFastOutputParameter1 while TaskB is in the middle of reading it.
gusFastOutputParameter1 is 2 bytes of data, but in the actual application I have 128 bytes of global data.

Please give me your thoughts on how I can avoid data corruption using FreeRTOS features (semaphore, mutex, queues or taskENTER_CRITICAL()) with low latency.

richard_damon wrote on Monday, May 30, 2016:

There are a couple of choices depending on what you need to do (especially how long it might take) and how much you can perturb the rest of the system.

Normally, the quickest is to wrap the access in a taskENTER_CRITICAL()/taskEXIT_CRITICAL() block. This typically costs just a few instructions, so it has very low latency. The problem is that the contents of the block must be very quick, as it holds off EVERYTHING in the system, including interrupts. (How much is too much is a function of your hard real time needs; copying 128 bytes might or might not be acceptable.)
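A minimal sketch of that pattern, assuming a 128 byte shared buffer (the buffer name, size, and function name below are illustrative, not from this thread):

```c
#include <stdint.h>
#include <string.h>
#include "FreeRTOS.h"
#include "task.h"

#define SHARED_DATA_SIZE 128
static uint8_t ucSharedData[ SHARED_DATA_SIZE ]; /* written by TaskA, read by TaskB */

/* Copy the shared buffer out under a critical section. Interrupts and the
   scheduler are held off for the whole memcpy, so keep the protected region
   as short as possible. */
void vCopyOutUnderCritical( uint8_t *pucDest )
{
    taskENTER_CRITICAL();
    memcpy( pucDest, ucSharedData, SHARED_DATA_SIZE );
    taskEXIT_CRITICAL();
}
```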

The least invasive approach is to wrap the access with a Mutex or Semaphore, which you take before and give after. This is still reasonably fast (but there is some delay from the code), and it only affects those tasks trying to use the Mutex/Semaphore. Semaphores are a bit simpler, so quicker, but don’t handle the issue of priority inversion.
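In code, the mutex version might look like the sketch below (the handle and buffer names are illustrative; the mutex must be created once with xSemaphoreCreateMutex() before either task runs):

```c
#include <stdint.h>
#include <string.h>
#include "FreeRTOS.h"
#include "semphr.h"

#define SHARED_DATA_SIZE 128
static uint8_t ucSharedData[ SHARED_DATA_SIZE ];
static SemaphoreHandle_t xDataMutex; /* created at start-up: xSemaphoreCreateMutex() */

/* TaskA side: take the mutex, update the buffer, give it back. */
void vWriteSharedData( const uint8_t *pucSrc )
{
    if( xSemaphoreTake( xDataMutex, portMAX_DELAY ) == pdTRUE )
    {
        memcpy( ucSharedData, pucSrc, SHARED_DATA_SIZE );
        xSemaphoreGive( xDataMutex );
    }
}

/* TaskB side: same take/copy/give pattern when reading. */
void vReadSharedData( uint8_t *pucDest )
{
    if( xSemaphoreTake( xDataMutex, portMAX_DELAY ) == pdTRUE )
    {
        memcpy( pucDest, ucSharedData, SHARED_DATA_SIZE );
        xSemaphoreGive( xDataMutex );
    }
}
```

Because this is a mutex rather than a plain semaphore, priority inheritance raises the holder’s priority if a higher priority task blocks on it.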

savindra wrote on Tuesday, May 31, 2016:

Thanks for your reply.

As I studied the difference between a semaphore and a mutex: a mutex does mutual exclusion very well, and it includes a priority inheritance mechanism.
As you said, I must first find out how much time is needed to copy 128 bytes and how that affects the high priority task.

Thanks once again.

richard_damon wrote on Tuesday, May 31, 2016:

Yes, the Mutex is the primitive designed for this operation. The Semaphore is a simpler and more general tool with fewer restrictions on its use, but doesn’t provide some features of the Mutex. (For a basic pure mutual exclusion operation, use the Mutex if possible.)

The Critical Section works as a very lightweight operation for very fast accesses. Moving 128 bytes of data feels to me like it is on the edge of this space. It might be appropriate if this was part of a high speed loop and the mutex operation was affecting performance, but it doesn’t sound like that is the case here, so I would go with the Mutex. At a 10 ms rate, the Mutex overhead shouldn’t be that significant.

The overhead of the critical section isn’t so much between these two tasks (the time of the operation will always be a potential delay for the high priority task; that is an essential part of mutual exclusion), but to other unrelated tasks. The Critical Section holds off ALL other operations, regardless of whether there is a dependency or not; its global nature is what gives it the faster operation.

savindra wrote on Thursday, June 02, 2016:

Hello Richard,

After going through the different methods for data protection in a multithreaded environment,
I see I can use a mutex or a semaphore.

I have 128 bytes of data, but the data is organised as structures of 8 bytes each,
and the reader task may be reading only 8-16 bytes of data in each cycle.

Idea 1:
What I am thinking is that I will associate a lock with each 8-byte data structure, so whichever task accesses that memory location will take the corresponding lock.

But one issue occurs to me: I am updating some of those 128 bytes from an ISR, and waiting in an ISR may not be a good idea.

Idea 2:
I am also thinking about double buffering. When the writing task has finished with some data it can write it to a buffer, and that buffer will be used only for reading.
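The mechanics of that idea can be sketched in plain C (names are illustrative; in the real system the one-byte index flip should still be done atomically or inside a brief critical section, and the reader must finish its copy before the writer publishes again):

```c
#include <stdint.h>
#include <string.h>

#define BUF_SIZE 128

static uint8_t ucBuffers[ 2 ][ BUF_SIZE ];
static volatile uint8_t ucReadIndex = 0; /* index of the buffer readers may use */

/* Writer: fill the back buffer completely, then publish it by flipping the
   index. Readers never see a half-written buffer because the flip happens
   only after the memcpy has finished. */
void vPublish( const uint8_t *pucSrc )
{
    uint8_t ucBack = ( uint8_t ) ( 1u - ucReadIndex );
    memcpy( ucBuffers[ ucBack ], pucSrc, BUF_SIZE );
    ucReadIndex = ucBack; /* the swap point */
}

/* Reader: copy out of whichever buffer is currently published. */
void vReadLatest( uint8_t *pucDest )
{
    memcpy( pucDest, ucBuffers[ ucReadIndex ], BUF_SIZE );
}
```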

I am lost here; every method has its own advantages and disadvantages.
I am not sure which method is best for me.

hs2sf wrote on Thursday, June 02, 2016:

I’d recommend Idea 2 to minimize the need for locking. Be aware that locking isn’t free and can incur serious overhead.
Depending on your overall system, you will probably need to deal with (double) buffer overflows while debugging.
Good luck !

richard_damon wrote on Friday, June 03, 2016:

An ISR can NOT ‘wait’ for a mutex, so you can’t use a mutex for exclusion with an ISR. For that you either need to use a critical section, or have the ISR send the data to a task and let the task use the mutex.
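A sketch of the second option, with illustrative names: the ISR uses only the FromISR API (which never blocks), and a task later drains the queue and does the mutex-protected update.

```c
#include <stdint.h>
#include "FreeRTOS.h"
#include "queue.h"

static QueueHandle_t xRxQueue; /* created at start-up: xQueueCreate( 32, sizeof( uint8_t ) ) */

/* Called from the serial ISR with each received byte. xQueueSendFromISR
   never blocks, so the ISR never has to wait on the mutex at all. */
void vSerialISRHandler( uint8_t ucReceivedByte )
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    xQueueSendFromISR( xRxQueue, &ucReceivedByte, &xHigherPriorityTaskWoken );

    /* Request a context switch if posting unblocked a higher priority task. */
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}
```

The receiving task then blocks on xQueueReceive() and copies the data into the shared structures while holding the mutex.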

Also, if you are only updating 8-16 bytes, this is probably small enough that the latency of a critical section is acceptable.