AWS has provided sample applications for AWS IoT features in this repo. I am trying to run the telemetry, device shadow & jobs applications in multiple threads with the same priority. I create the server connection and mutual authentication once, before creating the threads for the applications, as below.
I have declared the socket structure variable as global, and I am trying to use the same socket connection in the different threads, but I could not achieve what I want, i.e., running the telemetry, device shadow & jobs applications in different threads in parallel. Below is a screenshot of the threads created after the server mutual authentication.
It is unclear where you see or experience a problem. What exactly is going wrong? Do you experience faults?
Re the general question of trying to do I/O to the same peer endpoint from multiple threads: regardless of whether the middleware supports it (most network stacks do not), the underlying protocol must support it, i.e. allow randomly interspersed sequences of messages. I have never seen such a protocol and doubt the usefulness of something like that. At the very least, you would need to serialize the messages and their respective acknowledgments via mutexes and state machines, but my suspicion is that you would gain nothing from such an architecture. Distinct endpoints for each stream is a much less error-prone and more natural way to tackle multithreading at the protocol level.
I have done the socket connection and TLS authentication with the server first, as AWS accepts only mutually authenticated client connections. After this I created two threads with the same priority, one for sending telemetry data and the other for the device shadow feature demo. The shadow application thread starts first, but once the second thread starts running, after a few steps in the second thread's code, the socket connection in the first thread gets closed and the program hangs there.
I tried to use a mutex for each operation in both threads, but the second thread is not able to take the mutex at all…
Which TCP/IP stack are you using? Probably either lwIP or FreeRTOS+TCP.
There are two ways of using the MQTT library. You can use coreMQTT directly, or you can use the MQTT agent. The agent adds thread awareness to coreMQTT. So, if you call coreMQTT APIs directly you need to take appropriate precautions to ensure it is not accessed from more than one thread at a time, whereas if you use only the MQTT agent APIs then any number of threads can share the MQTT connection.
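To illustrate the agent option: with the coreMQTT-Agent library, one task owns the MQTT connection and runs the agent's command loop, while any number of other tasks queue publishes through it. The following pseudo-code sketch assumes the agent context has already been set up with MQTTAgent_Init(); the task names and topic are illustrative only.

```c
static MQTTAgentContext_t xAgentContext;  /* Initialised elsewhere with MQTTAgent_Init(). */

/* The one task that owns the MQTT connection.  It executes commands
 * (publish, subscribe, ...) queued by any number of other tasks. */
void prvMQTTAgentTask( void * pvParameters )
{
    MQTTAgent_CommandLoop( &xAgentContext );
}

/* Any application task can then publish without extra locking - the
 * agent serialises access to the connection internally. */
void prvTelemetryTask( void * pvParameters )
{
    MQTTPublishInfo_t xPublishInfo = { 0 };
    MQTTAgentCommandInfo_t xCommandInfo = { 0 };

    xPublishInfo.pTopicName = "illustrative/telemetry/topic";
    /* ... fill in payload, QoS, completion callback, etc. ... */
    MQTTAgent_Publish( &xAgentContext, &xPublishInfo, &xCommandInfo );
}
```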
Thanks @rtel Richard for the reply, I am using FreeRTOS+TCP.
What would be the best way to do synchronisation, as both threads are communicating over the same socket connection? Once the second thread takes the mutex, the socket connection gets closed and then there is no communication between the client and server. Below is a screenshot of the logs once the second thread takes the mutex.
I would advise against using the same socket for both kinds of data. Firstly, it is very hard to debug when interspersed data is sent; secondly, if one thread hangs/fails, it will usually affect the other thread too when it tries to send data.
I would suggest that you create 2 sockets. Mutually authenticate both of the sockets with the AWS server. And then use one for the telemetry data and the second for the device shadow. That way, if one socket connection is closed for some reason, the other one will continue to work.
The pseudo code should look something like:
socket1 = FreeRTOS_socket( /* Params to create TCP socket */ );
socket2 = FreeRTOS_socket( /* Params to create TCP socket */ );
/* Create a TLS connection with the broker using the first socket - will be used for
* telemetry. */
SecureSocketsTransport_Connect( socket1, TelemetryPort );
/* Create a TLS connection with the broker using the second socket - will be used for
* device shadow. */
SecureSocketsTransport_Connect( socket2, DeviceShadowPort );
/* Send telemetry data over first socket. You can put this function in a separate
* thread too! */
TelemetryProcessingFunction( socket1 );
/* Send device shadow data over the second socket. You can put this function in a
* separate thread too! */
DeviceShadowProcessingFunction( socket2 );
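If you do want the two running in parallel, each processing function can be wrapped in its own task, along these lines (pseudo code again - the task names, stack size and priority are placeholders, and each wrapper just calls the corresponding processing function with the socket passed as its parameter):

```c
xTaskCreate( TelemetryTask,    "Telemetry", configMINIMAL_STACK_SIZE * 4,
             ( void * ) socket1, tskIDLE_PRIORITY + 1, NULL );
xTaskCreate( DeviceShadowTask, "Shadow",    configMINIMAL_STACK_SIZE * 4,
             ( void * ) socket2, tskIDLE_PRIORITY + 1, NULL );
```

Because each task owns its own socket, neither needs a mutex to talk to the broker.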
Is there a particular reason why you would like to use the same socket for both things? I would like to understand the use case that you are trying to solve - as there might be a simpler solution for it.
Also, if you can attach a Wireshark log, it might be helpful. Although since you are using TLS, it will only show any obvious errors - we cannot look for flags and internal data in a TLS encrypted packet.
The requirement was to implement multithreading for all the AWS IoT feature demo applications so that we could see all the feature apps running in parallel. But our requirement has changed now, so we are not going to implement multithreading. Your suggestions would really help me to clarify my doubts.
One thing I want to get clarified: can we communicate with a server from multiple threads over a single established socket connection, if all threads are reading from & writing to the same server over a single TCP socket?
The link @RAc posted above should help you with your question.
To elaborate on the post description:
If you have 2 tasks, of which one is sending (using FreeRTOS_send) and the other is receiving (using FreeRTOS_recv), then it will work absolutely fine. Both tasks will function well and send and receive data as they are supposed to.
But if you have 2 (or more) tasks trying to call FreeRTOS_send concurrently using the same socket, then it won't work, as the data will be interspersed.
For example, if one task is sending “ABCDEFGH” and the second task is sending “12345678”, there is no guarantee that the server will receive “ABCDEFGH” followed by “12345678”. The server might end up receiving “ABC1234DE56FGH78” (or any other interleaving).
Thus, it is not advisable to call send from multiple tasks at once.
Similarly, if you have 2 (or more) tasks trying to call FreeRTOS_recv concurrently using the same socket, then it won't work, as the data will be interspersed.
For example, if the server is sending “ABCDEFGH12345678” (where “ABCDEFGH” is meant for 1st task and “12345678” is meant for 2nd task), there is no guarantee that both tasks will receive the data correctly (unless you jump through many hoops).
Thus, it is not advisable to call recv from multiple tasks at once.
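The usual pattern, if a shared socket is unavoidable, is a single dedicated reader task that owns FreeRTOS_recv() and demultiplexes incoming messages to per-task queues. This only works if the protocol frames its messages so the reader can tell where each message ends and which task it belongs to. A pseudo-code sketch (socket, buffers and queues are assumed to exist):

```c
void prvSocketReaderTask( void * pvParameters )
{
    for( ;; )
    {
        /* Read one complete, framed message from the shared socket... */
        FreeRTOS_recv( xSharedSocket, ucBuffer, xMessageLength, 0 );

        /* ...then hand it to the consumer it belongs to. */
        if( /* message is addressed to the first task */ )
        {
            xQueueSend( xTask1Queue, ucBuffer, portMAX_DELAY );
        }
        else
        {
            xQueueSend( xTask2Queue, ucBuffer, portMAX_DELAY );
        }
    }
}
```

Those are the “many hoops” mentioned above, which is why two sockets are the simpler answer.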