I need to send some data over Ethernet - about 50 packets per minute, but the exact timestamp of each packet is random. Should I rather:
a) create a single task that runs forever and waits on a semaphore, sending the data over Ethernet as soon as it is ready?
b) design a task that serves the single purpose of sending the data over Ethernet once, and start it from inside some other task with xTaskCreate() only when there is data ready to be sent? (This task would delete itself after having the data sent.)
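For reference, option (a) is commonly sketched in FreeRTOS along these lines (all names and sizes here are illustrative placeholders, not from any particular project; a queue is used instead of a bare semaphore so the data itself can be handed over):

```c
/* Sketch of option (a): one persistent sender task that blocks on a
 * queue until data arrives. While waiting, the task is in the Blocked
 * state and consumes no CPU time. Illustrative only. */
#include "FreeRTOS.h"
#include "task.h"
#include "queue.h"

typedef struct {
    uint8_t payload[64];   /* illustrative packet buffer */
    size_t  len;
} EthPacket_t;

static QueueHandle_t xEthQueue;

/* send_over_ethernet() is a placeholder for the real driver call. */
extern void send_over_ethernet(const uint8_t *pucData, size_t xLen);

static void vEthSenderTask(void *pvParameters)
{
    EthPacket_t xPacket;
    (void) pvParameters;
    for (;;) {
        /* Block indefinitely until a packet is queued by a producer. */
        if (xQueueReceive(xEthQueue, &xPacket, portMAX_DELAY) == pdPASS) {
            send_over_ethernet(xPacket.payload, xPacket.len);
        }
    }
}
```

A producer (for example the CAN task) would then post with something like `xQueueSend(xEthQueue, &xPacket, 0);` and never needs to know how or when the send happens.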
As to my understanding:
Solution a) would permanently take up some resources (stack and TCB) for the task, even though it would spend most of its time in the Blocked state waiting on the semaphore.
Solution b) seems more elegant; however, I am somewhat worried that creating and deleting a task every second may not work well…
Furthermore, is it good practice to create a task without an infinite loop, e.g. a task that just calls a few functions and then deletes itself?
Thank you very much for any ideas and comments.
How many times do the 50 packets need to be sent? Just once, or is this something you will have to repeat over and over? If it repeats, then making the task persistent (i.e. not deleting and recreating it) would seem to be the way to go, as it ensures the resources required by the task are always available. However…
Why do you need to send the packets from a separate task at all? Can you just do it inline from the original task?
just once - this data will be different each time.
I would prefer to keep things separate - the data consists of CAN message IDs, and I would prefer to have a dedicated task for the CAN functionality while other tasks do different work. The reason is that the CAN functionality has a higher priority than sending the data over Ethernet.
If you are worried about the resources used by the task being tied up even when it is not running, you need to ask yourself: can you be sure those resources will be available when you need to start the task to send the message, and if not, do you have a recovery strategy? Also, one danger of regularly creating and destroying the task is that every time you release the memory and later need to reacquire it, there is a possibility of fragmenting the free space on the heap. In certain pathological cases the heap can become so fragmented that, even though there is plenty of total space left, no single chunk is big enough to meet the current request.
My general attitude is that unless there is a real need to dynamically share memory between operations, it is best to create static assignments at startup, so you know there is enough for everything that is needed. Only when there isn't enough memory to give everything what it might need does it make sense to go to dynamic allocation, and then you need to take care to handle the possible out-of-memory errors.
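In FreeRTOS terms, the static-assignment approach can be expressed with `xTaskCreateStatic()`, which takes caller-supplied stack and TCB buffers so the task never touches the heap at all. A minimal sketch, assuming `configSUPPORT_STATIC_ALLOCATION` is set to 1 in FreeRTOSConfig.h (the stack size and task function name are illustrative):

```c
/* Static allocation sketch: the stack and TCB live in .bss, so
 * starting the task can never fail due to heap exhaustion or
 * fragmentation. Requires configSUPPORT_STATIC_ALLOCATION == 1. */
#include "FreeRTOS.h"
#include "task.h"

#define ETH_TASK_STACK_WORDS 256   /* illustrative stack depth, in words */

static StackType_t  xEthTaskStack[ETH_TASK_STACK_WORDS];
static StaticTask_t xEthTaskTCB;

/* The sender task function itself, defined elsewhere. */
extern void vEthSenderTask(void *pvParameters);

void vStartEthSenderTask(void)
{
    xTaskCreateStatic(vEthSenderTask,        /* task entry function */
                      "EthTx",               /* human-readable name */
                      ETH_TASK_STACK_WORDS,  /* stack depth in words */
                      NULL,                  /* no parameter */
                      tskIDLE_PRIORITY + 1,  /* illustrative priority */
                      xEthTaskStack,         /* caller-supplied stack */
                      &xEthTaskTCB);         /* caller-supplied TCB */
}
```

Because the buffers are allocated at link time, the memory cost is visible up front and paid exactly once, which fits the "static assignments at startup" advice above.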
That is a good point. Thanks for the advice.