Considerations when migrating from preemptive to cooperative scheduling

I have developed a program that periodically reads some input, does some processing, and writes some output. Plain old global variables are used for sharing information among tasks.

Recently I’ve learned about the data race UB in C11, and I realize the program is full of data races.

To solve the data race problem, I could define the global variables as atomic or use mutexes to protect them. But that seems like a lot of work.

I am wondering whether the data races can be solved by simply defining configUSE_PREEMPTION as 0 instead of the original 1. Is there anything I should watch out for if configUSE_PREEMPTION is changed to 0? Thanks very much.

The basic idea of the program:

struct Input1 { int someData[50]; /*...*/ } g_input1;
struct Input2 { int someData[50]; /*...*/ } g_input2;
struct Output1 { int someData[50]; /*...*/ } g_output1;
struct Output2 { int someData[50]; /*...*/ } g_output2;

// task bodies, run every step, non-blocking
void GetInput1(struct Input1 *);
void GetInput2(struct Input2 *);
void Process(const struct Input1 *, const struct Input2 *, struct Output1 *, struct Output2 *);
void SetOutput1(const struct Output1 *);
void SetOutput2(const struct Output2 *);

#define PERIOD pdMS_TO_TICKS(10)

void InputTask1(void *arg) {
  for (;;) {
    GetInput1(&g_input1);
    vTaskDelay(PERIOD);
  }
}
void InputTask2(void *arg) {
  for (;;) {
    GetInput2(&g_input2);
    vTaskDelay(PERIOD);
  }
}
void ProcessTask(void *arg) {
  for (;;) {
    // consume input and produce output
    Process(&g_input1, &g_input2, &g_output1, &g_output2);
    vTaskDelay(PERIOD);
  }
}
void OutputTask1(void *arg) {
  for (;;) {
    SetOutput1(&g_output1);
    vTaskDelay(PERIOD);
  }
}
void OutputTask2(void *arg) {
  for (;;) {
    SetOutput2(&g_output2);
    vTaskDelay(PERIOD);
  }
}
int main(void) {
  // same priority, task order doesn't really matter
  xTaskCreate(InputTask1 /*...*/);
  xTaskCreate(InputTask2 /*...*/);
  xTaskCreate(ProcessTask /*...*/);
  xTaskCreate(OutputTask1 /*...*/);
  xTaskCreate(OutputTask2 /*...*/);
  vTaskStartScheduler();
}

Atomics won’t help here. While C11 technically lets you declare a whole structure _Atomic, accessing a member of an atomic structure is undefined behavior (C11 6.5.2.3), so you could only load and store the entire structure at once, and an atomic object that large is almost certainly not lock-free.

Presumably, you want the “Input” tasks to run first, then the “Process” task, then the “Output” tasks, in that order, with the two inputs running in parallel (and likewise the two outputs). You also don’t want the possibility of just one input being updated, then processing, then the other input updating after the processing.

If that IS true, then simple bit-based task-to-task signaling will handle the work. The input tasks each do their work, set a different notification bit in the process task, then delay, and perhaps wait for a signal back from the process task before getting the next input.

The process task waits to be signaled by both input tasks, processes the inputs, then signals the two output tasks and the two input tasks, indicating that the output buffers are ready and the input buffers are free. It then waits for the output tasks to signal back before returning to wait for the input tasks to provide the next buffers.

The output tasks likewise wait to be signaled by the process task, do their work, and when done signal the process task that the buffers are ready for the next data.

There are no data races, as every buffer is protected by a token passed back and forth via the signaling, indicating which task “owns” the buffer.

Using configUSE_PREEMPTION 0 won’t make sure both input tasks complete before the process task runs, and relying on it means the input tasks can’t use “blocking” I/O to gather their data, which is probably unreasonable. Note also that interrupts still preempt even in cooperative mode, so any data shared with an ISR would still race.
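For reference, the change under discussion is a single line in FreeRTOSConfig.h, sketched here with a comment on what it actually guarantees:

```c
/* FreeRTOSConfig.h */
#define configUSE_PREEMPTION    0   /* cooperative: a running task keeps the
                                       CPU until it blocks, calls taskYIELD(),
                                       or otherwise invokes the scheduler */
```

So a compute-heavy task that never blocks would starve everything else, which is another reason the signaling scheme above is the safer fix.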

Do you need to have separate tasks for input, process and output? Can you not do that all in one or two tasks like the following:

void Task1( void * params )
{
    for( ;; )
    {
        /* Get input 1. */
        /* Process input 1. */
        /* Output input 1. */
        vTaskDelay( PERIOD );
    }
}

void Task2( void * params )
{
    for( ;; )
    {
        /* Get input 2. */
        /* Process input 2. */
        /* Output input 2. */
        vTaskDelay( PERIOD );
    }
}

There is no indication that output1 depends only on input1.

Also, there may need to be an overlap between getting the input and sending the output. (Lots of details are missing from the example that would matter when thinking about optimizations.)