Issues with FreeRTOS_closesocket

I see a couple of issues with FreeRTOS_closesocket.

The first issue: the zero time-out is unnecessary, and the comment is misleading:

This is because in xSendEventStructToIPTask we find this time-out overridden when the call comes
from the IP-task:

A time-out of portMAX_DELAY would be better here,
especially since nobody is expected to handle the return value of FreeRTOS_closesocket.

Which leads to issue number two.

That is, when FreeRTOS_closesocket is called from the IP-task, it is necessary to handle its return value:

But both calls to FreeRTOS_closesocket from FreeRTOS_TCP_IP.c fail to
handle that return value, and may therefore cause memory leaks.

Note that I have a test case that triggers the closesocket in vTCPStateChange,
while the closesocket in prvTCPSocketCopy is believed to be unreachable.
I can share the test case with you if you want.

A similar issue may happen when a listening socket is closed with FreeRTOS_closesocket
while there are still pending accepted sockets that have u.xTCP.bits.bPassQueued or
u.xTCP.bits.bPassAccept set; these sockets can then neither be closed nor accepted any more.

Note, however, that when I try to create a test case for this I usually do not see a memory
leak but a crash. When I close the socket I also delete the signal set with
FreeRTOS_DeleteSocketSet, and because FreeRTOS_closesocket runs in the context of the
IP-task, the SocketSet_t is deleted right away, while the socket may still hold a
reference to the socket set, which gets signalled once the PassAccept bit is set.

A similar race may of course also happen with UDP sockets, when the socket is
deleted and a packet is received at the same time.

A first quick response:

You wrote:

because in xSendEventStructToIPTask we find this time-out overridden when called
from the IP-Task:

Yes you are right, thanks for noticing this!

The xIsCallingFromIPTask() test in xSendEventStructToIPTask() is newer than the code in FreeRTOS_closesocket(), and the comment should have been updated accordingly.

Within FreeRTOS_TCP_IP.c, the problem could be solved by calling vSocketClose() directly.

My attitude about the message queue of the IP-task has always been: it should never get full.
In vPrintResourceStats() the minimum space in the queue is checked by calling uxGetMinimumIPQueueSpace(). In my projects I like to see space for at least 10 more messages.

Note that this “queue check” is only done when ipconfigCHECK_IP_QUEUE_SPACE is defined.

ipconfigEVENT_QUEUE_LENGTH sets the size of the IP-task queue. It is normally defined as:

#define ipconfigEVENT_QUEUE_LENGTH  ( ipconfigNUM_NETWORK_BUFFER_DESCRIPTORS + 5 )

Note that ipconfigEVENT_QUEUE_LENGTH will be multiplied by sizeof( IPStackEvent_t ) (8 bytes on a 32-bit platform) to get the physical size of the queue. That is not too expensive.

You also wrote:

A similar issue may happen when a listening socket is closed with FreeRTOS_closesocket() while there are still pending accepted sockets with u.xTCP.bits.bPassQueued or u.xTCP.bits.bPassAccept set, these sockets can neither be closed nor accepted any more.

It is a bit unexpected to close a listening socket while receiving new connections, but indeed possible. And yes, you would get orphaned sockets.

One existing protection against this is the anti-hang protection:

/* As an attack surface reduction for ports that listen for inbound
 * connections, hang protection can help reduce the impact of SYN floods. */
#ifndef ipconfigTCP_HANG_PROTECTION
	#define ipconfigTCP_HANG_PROTECTION    1
#endif

/* Non-activity timeout is expressed in seconds. */
#ifndef ipconfigTCP_HANG_PROTECTION_TIME
	#define ipconfigTCP_HANG_PROTECTION_TIME    30
#endif

About FreeRTOS_DeleteSocketSet(): yes, it wouldn’t hurt if the actual deletion of the socket set were also done by the IP-task, to avoid race conditions.

The comment above mentions “SYN floods”. A better protection against TCP floods is to limit the number of possible child sockets, by setting the xBacklog parameter of FreeRTOS_listen():

BaseType_t FreeRTOS_listen( Socket_t xSocket, BaseType_t xBacklog );

When it is set to 1, only a single client socket will be created. Only when that socket is closed can a new connection be accepted.

In order to protect against UDP floods, the following macro can be set:

/* Make positive to define the maximum number of packets which will be buffered
 * for each UDP socket.
 * Can be overridden with the socket option FREERTOS_SO_UDP_MAX_RX_PACKETS
 */
#define ipconfigUDP_MAX_RX_PACKETS    10U

When set to 10, at most 10 UDP packets will be stored per socket. The socket owner is responsible for processing the messages quickly enough to avoid dropped packets.

Thanks so far; more thoughts tomorrow.