There are many posts about this topic, but I can't solve my problem.
The client (Python, or whatever):
It connects to the server, sends the request data, and then does a shutdown SHUT_WR (EOF) to notify the server that the request is complete. It then loops on socket.recv until the server's EOF (server shutdown) to get the server's reply. At the end it closes its socket.
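The client flow described above can be sketched in Python as follows; the request/reply bytes and the in-process helper server are hypothetical, included only so the snippet runs standalone:

```python
import socket
import threading

def tiny_server(srv):
    # Throwaway in-process server, here only to make the snippet runnable:
    # it reads the request until the client's SHUT_WR arrives as EOF (b""),
    # sends a reply, and shuts down its own write side.
    conn, _ = srv.accept()
    request = b""
    while (chunk := conn.recv(1024)):
        request += chunk
    conn.sendall(b"reply:" + request)
    conn.shutdown(socket.SHUT_WR)   # server-side EOF ends the client's loop
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=tiny_server, args=(srv,), daemon=True).start()

# The client flow: connect, send, SHUT_WR, recv until EOF, close.
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"request")
cli.shutdown(socket.SHUT_WR)        # tell the server the request is complete
reply = b""
while (chunk := cli.recv(1024)):
    reply += chunk                  # loop until server EOF
cli.close()
print(reply)                        # b'reply:request'
```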
The server (an MCU with FreeRTOS+TCP):
server_listen_and_receive_task: creates the socket, binds, listens, and accepts connections (a single connection at a time). Once connected, it loops on a blocking socket recv, waiting for the client request until EOF (client shutdown). It then notifies server_send_task to produce the reply and waits for the send task's notification before closing the connected socket. Then it goes back to accepting new connections.
server_send_task: waits for the connection notification from server_listen_and_receive_task, then sends the reply to the client and, when finished, sends a shutdown. It then notifies server_listen_and_receive_task that it can close the socket, and goes back to the beginning.
The problem: after the client's shutdown, server_send_task can no longer write to the socket because it is already unconnected (error -128).
Why do you have separate tasks if the protocol always enforces a strictly linear sequence of client-sends-request, server-sends-response? That could and should be handled in one task.
Maybe a bad design, but while the client is connected I also send some events periodically (state machine transitions), and the server response takes some seconds before the shutdown. Even in this case one task should suffice.
But if I did it correctly, that should not be the problem.
The control flow looks OK, unless there is something like a timeout on the client side that closes the socket prematurely, or your task notifications arrive out of order, or your EOF markers are not disambiguated from payload data (i.e. they could also occur inside the payload stream).
There is also the issue of possibly non-thread-safe usage of sockets; some network stacks such as lwIP simply do not support concurrent access from multiple threads. I do not know about FreeRTOS+.
To me it looks as if your protocol is not full-duplex capable, so do yourself a favor and use a single thread to handle a transaction in sequence. One of the rules of multithreading is that too much of it is as bad as not enough of it.
Edit: you may want to include a Wireshark trace of a transaction.
I will try your suggestion and see if it works.
The client should be OK: I wrote a client/server pair in Python and it works fine (the Wireshark trace is as expected).
I will come back in a couple of days and include data and code.
The first task only reads, the second task only writes… this should be OK in FreeRTOS+.
But if you use notifications, then you cannot pass a payload between your tasks; yet they must at least have access to the same socket descriptor. How do you synchronize the socket between your tasks?
I use a queue to pass requests to server_send_task; I pass the socket descriptor and some data describing the request. Right now I close the socket in server_listen_and_receive_task; it would probably be better to close it in server_send_task.
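The two-task split described here can be modelled in Python, with a queue.Queue standing in for the FreeRTOS queue (the task names mirror the ones above; the request/reply bytes are made up). Note that the socket is closed in the send task, after the last write:

```python
import queue
import socket
import threading

# Queue carrying (worker socket, request bytes) from the receive task to
# the send task, mirroring the FreeRTOS queue described above.
send_queue = queue.Queue()

def server_send_task():
    sock, request = send_queue.get()
    sock.sendall(b"reply:" + request)
    sock.shutdown(socket.SHUT_WR)   # server EOF ends the client's recv loop
    sock.close()                    # close in the sender, after the last write

def server_listen_and_receive_task(srv):
    conn, _ = srv.accept()
    request = b""
    while (chunk := conn.recv(1024)):
        request += chunk            # read until client SHUT_WR (EOF)
    send_queue.put((conn, request)) # hand off; do NOT close conn here

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=server_listen_and_receive_task,
                 args=(srv,), daemon=True).start()
threading.Thread(target=server_send_task, daemon=True).start()

# A client transaction against the two-task server:
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"req")
cli.shutdown(socket.SHUT_WR)
reply = b""
while (chunk := cli.recv(1024)):
    reply += chunk
cli.close()
print(reply)                        # b'reply:req'
```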
server_listen_and_receive_task is notified to “continue” via task notifications.
As I said, I will first try your hint of putting everything into one task.
That can be problematic if I decide to accept more than one connection, with multiple server_send_tasks reading the send queue.
What do you mean by “more server_send_task”? In an off-the-shelf multiple-connection architecture, you have exactly one thread per transaction that serves the transaction completely. There is no need for inter-task communication.
Hi @esr, I must admit that I have never heard of a “shutdown SHUT_WR (EOF) to notify server the request is complete”.
In the TCP communication that I have seen, the client sends a request, the server knows when the request is complete, and then the server replies.
So, if possible, could you send a PCAP of the conversation between client and server?
You can zip the PCAP and attach it to your post, often by drag&drop.
@RAc I have little experience with “correct architecture”, so I appreciate your help. My understanding and reasoning are the following:
- 1 single server_listen_and_receive_task dispatches only valid requests + the socket descriptor to a queue
- having 1 or a larger fixed number of tasks preparing the reply is good because I know exactly the amount of resources I need. Everything I need (tasks, task stacks, queues, working data) is statically allocated. The tasks are long-lived, never created or destroyed.
Is passing task context not a sort of inter-task communication? I cannot imagine my application having tasks that are completely isolated from each other; in the end I need data that is shared.
@htibosch I don't know the best practice, but I saw the use of sending SHUT_WR in this interesting Link and was interested in implementing something similar.
I can imagine an application where the data are repeated fields in a protobuf-encoded message. You can stop decoding either at EOF or at a fixed size written at the beginning of the payload. The nice thing about using EOF is that the client does not need to know the message length before it starts sending.
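A sketch of the two framing options mentioned here, assuming a hypothetical payload of repeated records and a 4-byte big-endian length prefix:

```python
import struct

def frame_with_length(payload: bytes) -> bytes:
    # Length-prefix framing: the total size must be known before sending.
    return struct.pack(">I", len(payload)) + payload

def parse_length_framed(data: bytes) -> bytes:
    # The receiver reads the prefix first, then exactly that many bytes.
    (n,) = struct.unpack(">I", data[:4])
    return data[4:4 + n]

payload = b"record1" + b"record2" + b"record3"   # stand-in for repeated fields
framed = frame_with_length(payload)
assert parse_length_framed(framed) == payload

# With EOF framing there is no prefix: the sender streams the records and
# calls shutdown(SHUT_WR); the receiver reads until recv() returns b"", so
# neither side needs the total length up front.
```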
Why do you think you need to “dispatch requests to a queue”?
The “stock” solution (note I do NOT present this as “correct”) is roughly like this:
- one thread (the “listener”) sits on an accept(), waiting for clients to connect.
- Whenever a client connects, the listener creates a new “processor” thread (or selects one from an existing pool), passes the socket resulting from the accept (the “worker socket”) to the processor thread, and returns to the accepting state. The listener never reads from or writes to the worker socket.
- Each processor thread is the only instance that communicates over its worker socket; it processes the transaction and closes the connection whenever appropriate. In the “request-response” world, the processor traverses a simple FSM consisting of the states “reading request,” “computing result,” and “writing result.”
- In that setup, the only point where IPC may be needed is when a processor has received and decoded a request and needs to consult another thread to compute the response. Alternatively, the processors may query the system “inline” via a function call for the response, which will likely lead to mutual-exclusion scenarios.
In that architecture, there is only one thread that deals with each worker socket. For most “single shot” transaction protocols such as HTTP requests, it is hard to beat that scheme in terms of robustness and responsiveness. The only potential drawback is that a server being hit hard by frequent client requests may be subject to (intentional or unintentional) DoS attacks, exhausting its resources at some point, in particular when computing the response involves CPU-bound, lengthy computations. In those cases, I/O completion ports are a very good and effective way to throttle the worker thread count, but FreeRTOS does not support them, which is not a problem for most practical embedded applications.
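The listener/processor scheme can be modelled minimally in Python (the real target is a FreeRTOS task in C; the thread names and the "result:" prefix are illustrative):

```python
import socket
import threading

def processor(worker):
    # Sole owner of its worker socket: read the request until client EOF,
    # "compute" the result, write it back, then close the connection.
    with worker:
        request = b""
        while (chunk := worker.recv(1024)):
            request += chunk
        worker.sendall(b"result:" + request)
        worker.shutdown(socket.SHUT_WR)

def listener(srv):
    # Sits on accept() and hands each worker socket to a fresh processor
    # thread; it never reads from or writes to the worker socket itself.
    while True:
        try:
            worker, _ = srv.accept()
        except OSError:
            break
        threading.Thread(target=processor, args=(worker,), daemon=True).start()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(5)
threading.Thread(target=listener, args=(srv,), daemon=True).start()

def transact(msg):
    # One full client transaction: connect, send, SHUT_WR, read until EOF.
    c = socket.create_connection(srv.getsockname())
    c.sendall(msg)
    c.shutdown(socket.SHUT_WR)
    reply = b""
    while (chunk := c.recv(1024)):
        reply += chunk
    c.close()
    return reply

print(transact(b"a"), transact(b"b"))   # b'result:a' b'result:b'
```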
We are getting a bit offtopic…
First of all, I am experimenting and NOT writing an HTTP server! In my case the “real” number of parallel tasks depends on the underlying hardware.
- 1 (at this time) up to 3 ADC tasks, each with its own FSM; they acquire, pause, or stop depending on events (GPIO inputs or TCP commands), and the acquired data can be read by making requests.
for example: reset, start, and get 3 seconds of data
- I want 1 single TCP server for all the ADC tasks.
- server_send_task sits between the ADC tasks and the server:
                  | --> _send_task 1 <-> ADCtask 1
server accept --> | --> _send_task 2 <-> ADCtask 2
                  | --> _send_task 3 <-> ADCtask 3
- I “believe” that having a queue per task and dispatching (commands + socket descriptor) to the right task is easier and cleaner to implement; commands belonging to ADCx are executed sequentially (a command can even be sent to the front of the queue if it has high priority), and there is no need for any IPC between the server_send_tasks or, as you proposed, between the “processor threads”.
Please tell me WHY I cannot pass around a socket descriptor and read and write in a sequential (not concurrent) way?
I never wrote or implied that you cannot do that. It is just that if the transaction requires strict sequential ordering of reads and writes to a socket, there is no point in splitting it into separate tasks. You end up synchronizing both tasks so strictly that you might as well do the same thing in a single task, as you do not benefit from interleaved multithreaded execution. You are in fact implementing what is sometimes called “coroutines.”
You can do that, but again, there are no benefits to be gained from that architecture, only a number of potential problems. Also, in my view, it obfuscates the control flow. A sequential protocol execution is most naturally modelled by a sequential implementation.
I believe that the problem you are seeing proves my point: you would not be in any danger of one thread closing a connection and thus surprising another thread if you encapsulated all accesses to the socket in a single thread.
But it is your architecture. I can only give you guidance from a few decades of experience in protocol implementation. If you still believe that you have a sound design - perhaps you do. But always remember the #1 mantra of software design: Always code as if the one who needs to maintain your code is a serial killer who knows your address.
I don't believe so… if I did, I would not be discussing it!
I simply do not understand, concretely, how you would do it.
I did outline this a few posts above (the “stock solution”).
But you write:
Your stock solution is not sequential, and if I have more than 1 ADC task it is not sequential anymore… or is it?
The solution IS strictly sequential from the point of view of a host communication transaction. You have not explained how your multiple ADCs map to host transactions. Does a client request a response from a specific ADC, or from any ADC? If there is a 1:1 mapping, it is even more strictly sequential.
You need to be clear about who DRIVES what. In your host communication scheme, the control flow is driven by the clients: as long as no client connects, nothing happens, but as soon as a client connects, the client determines by its request how your target behaves. From your fragmentary explanations, it seems to me as if the client(s) are “polling” the ADCs over the network.
If you want your ADCs to trigger actions, you may want to invert the communication direction, i.e. let your target be the TCP client and actively initiate a communication as soon as an ADC has something interesting to report (e.g. a significant change in reading).
Also, it is unclear to me what the nature of the separate client requests is. A TCP server is designed to accept connections from separate endpoints concurrently, for example from multiple independent PCs. However, if all client requests come sequentially from the same peer (such as in the “polling” case), you would not need any multithreading at all, just a server that traverses an infinite “accept-read-write-close” loop, as you will never receive a new client request while you process the current one. In that scenario, you could even leave the socket open and process all transactions sequentially over the same open connection (which has the additional benefit that you could also address security by negotiating a session key for the duration of that connection and encrypting all traffic with it).
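The single-threaded “accept-read-write-close” loop can be sketched like this (illustrative Python; the serve_n_transactions helper, the client driver, and the "ok:" prefix are made up for the demo):

```python
import socket
import threading

def serve_n_transactions(srv, n):
    # One transaction at a time: accept, read the request until client EOF,
    # write the reply, shut down, close, then accept the next client.
    for _ in range(n):
        conn, _ = srv.accept()
        with conn:
            request = b""
            while (chunk := conn.recv(1024)):
                request += chunk              # read until client SHUT_WR
            conn.sendall(b"ok:" + request)
            conn.shutdown(socket.SHUT_WR)     # signal end of reply

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_n_transactions, args=(srv, 1),
                 daemon=True).start()

# Drive one transaction from the client side:
cli = socket.create_connection(srv.getsockname())
cli.sendall(b"ping")
cli.shutdown(socket.SHUT_WR)
reply = b""
while (chunk := cli.recv(1024)):
    reply += chunk
cli.close()
print(reply)                                  # b'ok:ping'
```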
Here is a Python implementation of the client and server and an example transaction:
pythonclientserver.zip (2.1 KB)