Can anybody confirm that FreeRTOS_send() and FreeRTOS_recv() can
be used concurrently (i.e., from different tasks) on the same socket?
This is possible with the BSD interface; I'm just double-checking that
the same semantics apply here.
Hi Gerhard, yes, the library has been made that way: one task may call send(), while the other task calls recv(). That is perfectly safe.
Just make sure that you synchronise the closure event among the tasks: once either task gets a negative result other than -pdFREERTOS_ERRNO_EAGAIN, the connection has become invalid and the socket must be closed by calling FreeRTOS_closesocket().
Once the socket is closed, the other task must not use it any more; the penalty would be an application crash.
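As a minimal sketch of that pattern: two tasks sharing one connected socket, each treating any negative result other than -pdFREERTOS_ERRNO_EAGAIN as "connection gone". The task names and payload are made up for illustration, and the actual close/hand-off signalling between the tasks is left out.

```c
/* Sketch only: assumes an already-connected Socket_t passed in as the
 * task parameter; requires the FreeRTOS+TCP headers. */
#include "FreeRTOS.h"
#include "task.h"
#include "FreeRTOS_IP.h"
#include "FreeRTOS_Sockets.h"

static void vSenderTask( void *pvParameters )
{
    Socket_t xSocket = ( Socket_t ) pvParameters;
    static const char pcMessage[] = "status-update\n";

    for( ;; )
    {
        BaseType_t xResult = FreeRTOS_send( xSocket, pcMessage,
                                            sizeof( pcMessage ) - 1U, 0 );

        if( ( xResult < 0 ) && ( xResult != -pdFREERTOS_ERRNO_EAGAIN ) )
        {
            /* Connection invalid: signal the other task and make sure
             * only ONE of the two tasks calls FreeRTOS_closesocket(). */
            break;
        }
        vTaskDelay( pdMS_TO_TICKS( 1000 ) );
    }
    vTaskDelete( NULL );
}

static void vReceiverTask( void *pvParameters )
{
    Socket_t xSocket = ( Socket_t ) pvParameters;
    char pcBuffer[ 128 ];

    for( ;; )
    {
        BaseType_t xResult = FreeRTOS_recv( xSocket, pcBuffer,
                                            sizeof( pcBuffer ), 0 );

        if( ( xResult < 0 ) && ( xResult != -pdFREERTOS_ERRNO_EAGAIN ) )
        {
            break;  /* Same rule: agree on which task closes the socket. */
        }
    }
    vTaskDelete( NULL );
}
```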
In an embedded application, when resources are scarce, it would be preferable to handle both send() and recv() from within the same task.
Or even better, handle a whole set of sockets from a single task.
We have given an example of doing so in the HTTP and FTP protocol implementations, which you can find in FreeRTOS-Plus-TCP/protocols.
That server has a main work function, FreeRTOS_TCPServerWork(), which in turn calls FreeRTOS_select().
The function select(), as you know, waits for any event that happens on a collection of handles (sockets, in this case). This may be a read, a write or an error event.
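In outline, a select()-based loop looks like the sketch below; the socket array, its size, and the 1-second timeout are illustrative.

```c
/* Sketch: waiting on several sockets with FreeRTOS_select().
 * Assumes the FreeRTOS+TCP headers and an array xSockets[] of
 * xSocketCount connected sockets. */
SocketSet_t xSocketSet = FreeRTOS_CreateSocketSet();
BaseType_t x;

for( x = 0; x < xSocketCount; x++ )
{
    FreeRTOS_FD_SET( xSockets[ x ], xSocketSet,
                     eSELECT_READ | eSELECT_EXCEPT );
}

for( ;; )
{
    /* Block until at least one socket has an event, or time out. */
    BaseType_t xResult = FreeRTOS_select( xSocketSet,
                                          pdMS_TO_TICKS( 1000 ) );

    if( xResult != 0 )
    {
        for( x = 0; x < xSocketCount; x++ )
        {
            if( FreeRTOS_FD_ISSET( xSockets[ x ], xSocketSet ) != 0 )
            {
                /* Handle this socket in a non-blocking way. */
            }
        }
    }
}
```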
What is the estimated resource usage of a task that handles the receive loop? Data
input is pretty rare in our application (basically just parameter changes). If it's not
more than a couple of hundred bytes, we'd rather favor clarity.
Alternatively, you can couple a semaphore to your sockets. After this coupling, the TCP/IP driver will give to the semaphore at each moment of interest:
● After a packet has been sent ( and TX space becomes available )
● When new data has been received
● When the state of the connection changes ( e.g. established or dropped )
The main loop can simply block on the semaphore. When it wakes up, all sockets will be checked in a non-blocking way.
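A minimal sketch of that main loop, assuming your FreeRTOS+TCP version supports the FREERTOS_SO_SET_SEMAPHORE socket option; the semaphore name and the timeout are illustrative.

```c
/* Sketch: one semaphore coupled to every socket of interest.
 * Requires the FreeRTOS+TCP headers and semphr.h. */
SemaphoreHandle_t xEventSemaphore = xSemaphoreCreateBinary();

/* Couple the semaphore to each socket; repeat for every socket. */
FreeRTOS_setsockopt( xSocket, 0, FREERTOS_SO_SET_SEMAPHORE,
                     &xEventSemaphore, sizeof( xEventSemaphore ) );

for( ;; )
{
    /* Block until the IP task gives the semaphore ( RX data, TX space
     * freed, or a connection change ), or until the timeout expires. */
    xSemaphoreTake( xEventSemaphore, pdMS_TO_TICKS( 1000 ) );

    /* Now check every socket with non-blocking send()/recv() calls. */
}
```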
OK. I looked at the stack usage; it is ca. 1k in our case.
That is largely dwarfed by the buffer sizes. Do you have suggestions for optimizing those in a memory-constrained environment?
I just reduced our value of ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS from 20 to 10; that gains us ca. 15k and appears to work just fine. Any other suggestions? The TCP stack seems to allocate a lot (ca. 37k) on the heap.
It is not quite clear to me where that comes from, or what could further be tuned.
Performance is not important for our application, but we are a bit memory constrained (LPC1837,
total ca. 100k). The device will eventually send a few hundred bytes/s over a single TCP connection and occasionally (every few minutes) receive a settings command (a couple of hundred bytes). CPU load is not an issue.
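For reference, much of the TCP stack's heap usage typically comes from the per-socket stream buffers plus the network buffer pool, which are governed by a few ipconfig macros. A hedged FreeRTOSIPConfig.h fragment along these lines (the values are illustrative; check FreeRTOSIPConfigDefaults.h in your stack version for the actual defaults) might look like:

```c
/* FreeRTOSIPConfig.h fragment -- illustrative values only. */

/* Fewer network buffers: saves RAM at the cost of burst capacity. */
#define ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS    10

/* A smaller MSS shrinks every TCP segment. */
#define ipconfigTCP_MSS                           536

/* Per-socket TCP stream buffers; the defaults are several times the
 * MSS, which is where much of the heap usage tends to go. */
#define ipconfigTCP_RX_BUFFER_LENGTH              ( 2 * ipconfigTCP_MSS )
#define ipconfigTCP_TX_BUFFER_LENGTH              ( 2 * ipconfigTCP_MSS )
```

Buffer sizes can also be set per socket with FreeRTOS_setsockopt() ( e.g. FREERTOS_SO_RCVBUF / FREERTOS_SO_SNDBUF ) before the connection is made, which is useful when only one low-traffic connection needs to be small.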