Task 1: higher priority, runs frequently, and performs a calculation to produce X
Task 2: lower priority, runs infrequently, and needs the latest value of X (low latency)
I had hoped that a queue of length one, "sending to front" on each Task 1 iteration and ignoring the overflow error, would leave the latest value ready for whenever Task 2 needed it.
BUT my tests show that a constantly overflowing one-place queue does not yield the same value as a long* queue drained to its last item! (*length jigged to 100 for the test, which is not practical in the target situation)
I cannot find an explicit statement of what happens in the overflow situation; all the texts and examples avoid it.
Thanks for your consideration. Is there an elegant solution?