LWIP + FreeRTOS socket sharing error

alweib wrote on Thursday, April 26, 2018:

Hey everyone,

I’m having an issue sharing a TCP socket between two tasks in FreeRTOS (v7.0) and lwIP (v1.4.0).

I’m using the socket for a cmd / reply interface and also to send information back asynchronously. These two ways of writing to the socket are of course in two different tasks. I’m using a semaphore to control access to the socket:

bool wdpshell_write( void *data, int l )
{
    if( pxNewWdpConnection != NULL )
    {
        xSemaphoreTake( xSemaphoreWdpBlock, portMAX_DELAY );
        bool err = ( ERR_OK == netconn_write( pxNewWdpConnection, data, l, NETCONN_COPY ) );
        xSemaphoreGive( xSemaphoreWdpBlock );
        return err;
    }
    return false;
}

It works fine when only the cmd / reply path or only the async path is running, but when both run I lose the connection. Inserting a delay ( vTaskDelay( webSHORT_DELAY ); ) after the semaphore take seems to make it work. Can anyone explain this behaviour? It would be very detrimental for performance if all traffic in that direction needed a delay.

Thanks in advance.


rtel wrote on Thursday, April 26, 2018:

I’m afraid I am not familiar enough with the internal workings of lwIP
to say - it is a long time since I’ve used it as we have our own TCP/IP
stack implementation.

As I recall there were restrictions on using sockets between different
threads, even when using the sockets interface in place of the netconn
interface, and there were various configuration options that had to be
set (NO_SYS maybe?).

Maybe somebody else on the forum is familiar with lwIP? Otherwise the
lwIP mailing list might get better answers.

alweib wrote on Thursday, April 26, 2018:

I’ll try to take it to lwIP, Richard. Thanks for the reply. If i learn anything over there, i’ll bring back the solution. Meanwhile, any other suggestions are much appreciated.

alweib wrote on Thursday, April 26, 2018:

Does anybody know where you’d post questions about lwIP? I can only find a community on Savannah, but that seems to be mostly bug listings, and this wouldn’t be appropriate to post as a bug.

Thanks in advance.

heinbali01 wrote on Thursday, April 26, 2018:

Hi Aleksander, I am not an expert on lwIP either. I don’t know why netconn_write() cannot be called from two tasks when protected by a semaphore.

Does anybody know where you’d post questions about lwIP?
I can only find a community on Savannah

I’m not sure which lwIP forum is most helpful, but did you try this one?

Within FreeRTOS+TCP, it is explicitly allowed to share a socket between two tasks: one may write, and the other may read. No semaphore ( mutex ) is needed to do so.
But when writing ( or reading ) to the same socket from two tasks, the socket must be protected with a mutex.
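As a minimal sketch of that rule, assuming FreeRTOS+TCP with the socket and mutex created elsewhere ( the names xSharedSocket, xSocketMutex and xSharedSocketSend are mine, purely for illustration ):

```c
/* Assumed to be created elsewhere with FreeRTOS_socket() and
 * xSemaphoreCreateMutex() respectively. */
extern Socket_t xSharedSocket;
extern SemaphoreHandle_t xSocketMutex;

/* Every task that transmits goes through this wrapper, so two
 * writers can never interleave their calls into the stack. */
BaseType_t xSharedSocketSend( const void *pvData, size_t uxLength )
{
    BaseType_t xResult;

    xSemaphoreTake( xSocketMutex, portMAX_DELAY );
    xResult = FreeRTOS_send( xSharedSocket, pvData, uxLength, 0 );
    xSemaphoreGive( xSocketMutex );

    return xResult;
}
```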

I think that under Linux or Windows, you will meet the same limitations.

But if I were you, I would invest time and energy in finding a way to keep a socket within a single task. If the return data comes from another task, let the socket-owning task gather that data and send it.

And more: if you use the select() function, a single task can handle multiple sockets.

B.t.w. you can download FreeRTOS+TCP from here:


And if you’re looking for demo application using FreeRTOS+TCP:


Note that the ZIP file contains an older release of FreeRTOS+TCP.

alweib wrote on Thursday, April 26, 2018:

Thanks for the answer, Hein. I do read from the socket in one task while writing from the other, but as you say this should be allowed. I only have one function for writing to the socket, which should protect against multiple ‘concurrent’ uses.

As I noted in my first post, I use lwIP v1.4.0. Maybe there was an issue with this back then?

I will try to take the post to the forum you linked, and post here with what I find.

heinbali01 wrote on Thursday, April 26, 2018:

What CPU / platform are you using?

alweib wrote on Thursday, April 26, 2018:

The AVR32 microcontroller.

I just got some news from the lwIP people. They said that full duplex (i.e., reading and writing from different tasks simultaneously) has never been supported in a stable released version. Are you of a different opinion, Hein?

heinbali01 wrote on Friday, April 27, 2018:

Are you of a different opinion Hein?

The person who replied to you ( Simon Goldschmidt ) knows a lot more about lwIP than I do.

He mentioned an important thing: “but only recent […] versions support aborting waiting read threads on close”.

When sharing a socket between threads, the closure of the socket must be well synchronised. Only one task may actually close the socket, while the other keeps its hands off. Any ongoing API call must be aborted before the socket can be closed.

As for FreeRTOS+TCP : blocking API’s like FreeRTOS_send(), FreeRTOS_recv(), and FreeRTOS_select() can be interrupted ( signalled ) by calling:

#if( ipconfigSUPPORT_SIGNALS != 0 )
    /* Send a signal to the task which reads from this socket. */
    BaseType_t FreeRTOS_SignalSocket( Socket_t xSocket );
#endif

There is also an ISR version : FreeRTOS_SignalSocketFromISR().

After the signal, any active API’s will return with an error -EINTR.
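A sketch of how that could look ( assuming FreeRTOS+TCP with ipconfigSUPPORT_SIGNALS enabled; the task split and variable names are illustrative only ):

```c
/* Task A: blocks reading the shared socket. */
char cBuffer[ 128 ];
BaseType_t xReceived = FreeRTOS_recv( xSocket, cBuffer, sizeof( cBuffer ), 0 );

if( xReceived < 0 )
{
    /* A negative result ( e.g. -EINTR after a signal ) tells Task A to
     * stop using the socket and let the closing task proceed. */
}

/* Task B: interrupt any blocked API call, then close the socket. */
FreeRTOS_SignalSocket( xSocket );
/* ... wait until Task A has acknowledged ... */
FreeRTOS_closesocket( xSocket );
```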

But, as I said earlier, I prefer not to share sockets among tasks. I know that you can not call both send() and recv() in a blocking way simultaneously. But there are many techniques to get around this, most importantly the use of select().

Here is a rough sketch, symbolically:

    for( ;; )
    {
        /* Call select() and sleep for at most N ms. */
        if( has_write_data() && socket_has_tx_space() )
        {
            write( s, buf, length, DONTWAIT );
        }
        if( socket_has_rx_data() )
        {
            read( s, buf, max_length, DONTWAIT );
        }
        if( connection_was_closed() )
        {
            /* Remove it from the select group. */
            FD_CLR( s, &event );
            close( s );
        }
    }

alweib wrote on Friday, April 27, 2018:

I actually don’t use FreeRTOS+TCP, but I’ll try to implement your pseudo-code anyway. I’ll just have to check whether select() is exclusive to FreeRTOS+TCP.

Otherwise, maybe there’s a non-blocking receive call I can use - that way everything can be synchronized in one task:

    /* pseudo code, with a non-blocking receive call */
    if( ( data = rec_data() ) != NULL )
    {
        reply = process_data( data );
    }
    else if( status_data_to_write )
    {
        /* send the asynchronous status data */
    }

heinbali01 wrote on Friday, April 27, 2018:

If you google the keywords “lwip select example”, you will find good examples. I’m not sure if it is all compatible with lwIP v1.4.0.
There is a lot of documentation about select(). It exists in every OS. It is a system call and it means:

    "Here is a set of handles (=sockets) and events (Read, Write, Exception)."
    "Please return when at least one event happens, or when a time-out occurs."

A Read event means that you may read.
Be careful with the Write event: switch it off when not needed, otherwise select() keeps on returning immediately saying: “You may write to socket x”.
The Exception event means that an error occurred.

A TCP server on a big (multi-core) host will typically fork() (create a new task) for every new client. In an embedded project, this is risky: you might run out of memory / resources.

I did write an FTP-server that uses select(): from a single task, hundreds of connections can be handled.
I tend to embed a socket (handle) in a structure that contains the complete information about that client.
After disconnection, this structure will be freed.

alweib wrote on Friday, April 27, 2018:

Ahh, that makes a lot of sense and seems applicable to my current problem. It looks like the version I use has something like this [ xQueueCreateSet() ]. Thanks again for the help, Hein!

heinbali01 wrote on Friday, April 27, 2018:

xQueueCreateSet() is a FreeRTOS function, which is useful if you wait on several queues.
I was thinking of lwip_select() as in this example

alweib wrote on Monday, April 30, 2018:

To anyone interested, I ended up using:

    netconn_new_with_callback( NETCONN_TCP, callbackFnc )

and then before the receive call I take a semaphore:

    xSemaphoreTake( xSemaphoreRecBlock, portMAX_DELAY );

which is given by callbackFnc. After that, the task also takes the semaphore used for writing (see first post), and gives it back once the receive call has finished.
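A sketch of that arrangement ( callbackFnc, xSemaphoreRecBlock and xSemaphoreWdpBlock are the names from this thread; treating NETCONN_EVT_RCVPLUS as the “data arrived” event is my assumption about how the callback would be used ):

```c
/* lwIP netconn event callback; it runs in the tcpip thread context,
 * so the plain ( non-ISR ) semaphore API can be used. */
static void callbackFnc( struct netconn *conn, enum netconn_evt evt, u16_t len )
{
    if( evt == NETCONN_EVT_RCVPLUS )
    {
        xSemaphoreGive( xSemaphoreRecBlock );
    }
}

/* Reader task: sleep until the callback reports data, then take the
 * write-protection semaphore before touching the shared connection. */
static void vReaderTask( void *pvParameters )
{
    struct netbuf *pxBuf;

    for( ;; )
    {
        xSemaphoreTake( xSemaphoreRecBlock, portMAX_DELAY );
        xSemaphoreTake( xSemaphoreWdpBlock, portMAX_DELAY );
        if( netconn_recv( pxNewWdpConnection, &pxBuf ) == ERR_OK )
        {
            /* ... process pxBuf ... */
            netbuf_delete( pxBuf );
        }
        xSemaphoreGive( xSemaphoreWdpBlock );
    }
}
```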