I understood. I will wait for the patch for x_emacpsif_dma.c.
Thank you so much!
This Monday I will be at home again where I have my Zybo board for testing. If you're in a hurry, please use the spin-loop for now.
Hein
Please have a look at HT zynq synchronise transmission (PR #1283).
Could you also test the solution?
Thanks,
Hein
Hi @htibosch
Thank you for the fix.
The fix looks good with my previous application.
However, we also connected another application to the same TCP server on a different port (100). When I ran this application, there were a lot of error messages.
This application sends a message to the TCP server periodically. Here is the pcap log.
pcap_log.zip (3.2 MB)
Even though the error messages were displayed, the TCP communication still works fine. Could you please investigate it? If you need more information, please let me know. Thank you
Thanks for testing again.
I tried the patch by starting iperf, sending data from the DUT to my laptop:
iperf3 -c 192.168.2.127 -4 --port 5001 --bytes 1G -R
and that goes well.
Could you attach the C-code and any Python script that you used in the last experiment?
It looks like some high-priority task is holding up the IP-task.
Or maybe you can try the iperf3 command (if that is not too much work)?
Can you show a list of priorities:
- ipconfigIP_TASK_PRIORITY
- niEMAC_HANDLER_TASK_PRIORITY
- The priority of other tasks involved
And maybe also send a copy of your FreeRTOSIPConfig.h?
In my own tests, the new mutex can always be taken within microseconds.
Sorry, I am on my summer vacation. I will handle your request next Monday.
FreeRTOSIPConfig.h (21.3 KB)
FreeRTOSConfig.h (14.3 KB)
I attached the config files.
Basically, there are a lot of tasks in our application. Below are their priorities:
Task 1: 8 (highest priority)
Task 2: 7
Task 3: 6
niEMAC_HANDLER_TASK_PRIORITY: 5
ipconfigIP_TASK_PRIORITY: 4
Could you attach the C-code and any Python script that you used in the last experiment?
I am testing with our private application, which has complicated logic. I will try to write a simple C-code project to replicate the issue.
One more piece of information: the TCP client fetches data from the application every 100 ms, and the application sends back about 400,256 bytes to the TCP client.
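In rough outline, the reply side of that exchange looks something like this (a simplified sketch with illustrative names, not our real code):
~~~c
/* Simplified sketch of the pattern described above: every 100 ms the client
 * sends a request, and this routine answers with about 400,256 bytes. */
#include <stdint.h>
#include "FreeRTOS.h"
#include "FreeRTOS_Sockets.h"

#define appREPLY_SIZE    400256U

static void prvSendReply( Socket_t xClientSocket, const uint8_t * pucData )
{
    size_t uxSent = 0U;
    BaseType_t xResult;

    while( uxSent < appREPLY_SIZE )
    {
        /* FreeRTOS_send() may accept fewer bytes than requested, so loop until done. */
        xResult = FreeRTOS_send( xClientSocket, &( pucData[ uxSent ] ), appREPLY_SIZE - uxSent, 0 );

        if( xResult <= 0 )
        {
            break;  /* The connection was closed or an error occurred. */
        }

        uxSent += ( size_t ) xResult;
    }
}
~~~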
Thank you for this information.
Basically, there are a lot of tasks in our application. Below are their priorities:
Task 1: 8 (highest priority)
Task 2: 7
Task 3: 6
niEMAC_HANDLER_TASK_PRIORITY: 5
ipconfigIP_TASK_PRIORITY: 4
Normally, we keep this list of priorities:
- Highest: EMAC-handler
- High: IP-task
- Normal: all tasks that make use of the IP-task
All other tasks are free to choose, although care must be taken that the CPU is not kept busy for longer periods of time.
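As an example (only the ordering matters, not the exact numbers; the application macro name below is made up):
~~~c
/* Illustrative values only: keep the EMAC handler above the IP-task,
 * and the IP-task above the tasks that use it. */
#define niEMAC_HANDLER_TASK_PRIORITY    ( configMAX_PRIORITIES - 1 )  /* Highest: EMAC handler. */
#define ipconfigIP_TASK_PRIORITY        ( configMAX_PRIORITIES - 2 )  /* High: IP-task. */
#define mainTCP_CLIENT_TASK_PRIORITY    ( configMAX_PRIORITIES - 3 )  /* Normal: tasks that use the IP-task. */
~~~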
In FreeRTOSConfig.h:
#define configCPU_CLOCK_HZ 100000000 // 100 MHz
#define configTICK_RATE_HZ 40000 // 25 usec
I wouldn't be happy with such a fast clock tick. I would rather see a relaxed clock tick of 1000 Hz, and implement the real-time things in ISRs from a Clock/Timer peripheral. But I don't know the details of your project.
Clock/Timers also provide a perfect measurement of time, e.g. here. It returns the 64-bit time in usec.
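As a sketch of the idea on the Zynq (using the Xilinx xtime_l.h helpers for the Cortex-A9 global timer; this is not the code behind the link above, so check the macros against your BSP):
~~~c
#include <stdint.h>
#include "xtime_l.h"    /* Xilinx BSP: 64-bit global timer, clocked at half the CPU frequency. */

/* Return the time since boot in microseconds, derived from the 64-bit global timer. */
static uint64_t ullGetMicroseconds( void )
{
    XTime xTicks;

    XTime_GetTime( &xTicks );
    /* COUNTS_PER_SECOND is provided by xtime_l.h (CPU clock / 2 on the Zynq-7000). */
    return ( uint64_t ) xTicks / ( COUNTS_PER_SECOND / 1000000UL );
}
~~~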
I will try to do the same kind of transport within a testing environment.
Basically, we have tasks that run periodically, up to every 100 us, and these tasks are more important than the TCP tasks; that is why they have higher priorities.
I am also trying to modify your sample code to replicate the issue, but I haven't been successful yet. I will inform you once I can.
Hi @htibosch
Please try to replicate the issue with these modified files.
Please also increase the heap size, since 400,256 bytes of data will be sent, and enable the print log to see the output as well. Thank you
test_server.py.h (530 Bytes)
test_server.c (10.8 KB)
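The heap and logging settings mentioned above are along these lines (the values are only placeholders, please adapt them to your board):
~~~c
/* In FreeRTOSConfig.h: make the heap large enough for the 400,256-byte reply
 * plus the network buffers (the value below is only an example). */
#define configTOTAL_HEAP_SIZE       ( ( size_t ) ( 1024U * 1024U ) )

/* In FreeRTOSIPConfig.h: enable the stack's logging output. */
#define ipconfigHAS_PRINTF          1
#define ipconfigHAS_DEBUG_PRINTF    1
~~~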
Hi @htibosch
Were you able to replicate the issue with the modified files?
Hi @trantam_cdt92,
Sorry that it took a while. For a long time, I was searching in the wrong direction. Yesterday I found that the Python TCP driver also has "an issue".
Yes, I was able to get the same kind of unwanted pauses:
I made two important changes to the driver in PR #1286 (Zynq driver: avoid race conditions by htibosch · Pull Request #1286 · FreeRTOS/FreeRTOS-Plus-TCP · GitHub).
Storage of the notification value should be atomic, and access to transmission descriptors should be protected with a mutex.
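The idea behind the second change, in outline (a simplified pattern, not the literal code of the PR):
~~~c
#include "FreeRTOS.h"
#include "semphr.h"

/* Created once during driver initialisation. */
static SemaphoreHandle_t xTXDescriptorMutex = NULL;

void vExampleInitTXMutex( void )
{
    xTXDescriptorMutex = xSemaphoreCreateMutex();
    configASSERT( xTXDescriptorMutex != NULL );
}

void vExampleTouchTXDescriptor( void )
{
    /* Only one task at a time may access the DMA transmission descriptors. */
    if( xSemaphoreTake( xTXDescriptorMutex, portMAX_DELAY ) == pdTRUE )
    {
        /* ... fill in or clean up a TX descriptor here ... */
        xSemaphoreGive( xTXDescriptorMutex );
    }
}
~~~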
In your test, the Python script sends a short string, "thank you", and then expects 400,256 bytes in return. Most of the time it works well. When it goes wrong, the script is changing direction: it sends "thank you".
My Zynq sees an ACK of the last data block in frame number 0x1001.
My laptop sees a PSH and "thank you" in frame number 0x1002.
The Zynq never sees 0x1002, and it receives "thank you" about 100 ms later. After 200 ms, the TCP connection is "in sync" again and communication goes on.
It looks like the Python app has sent both a pure ACK in 0x1001 as well as "thank you" in 0x1002.
Solution:
~~~diff
+ sleep(0.001)
  s.sendall(b"thank you")
~~~
so that the script has time to deliver the last ACK before sending the string in a PSH message.
If I had time I would like to write the client in C, and see if it has the same problem when switching direction.
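In rough outline it would be something like this (POSIX sockets, error handling mostly omitted; the address and port are placeholders):
~~~c
/* Sketch of a C client that mirrors the Python test: send a short request,
 * read the ~400,256-byte reply, then switch direction again. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define REPLY_SIZE    400256U

int main( void )
{
    static char buffer[ 65536 ];
    struct sockaddr_in server = { 0 };
    int fd = socket( AF_INET, SOCK_STREAM, 0 );

    server.sin_family = AF_INET;
    server.sin_port = htons( 100 );                          /* Placeholder port. */
    inet_pton( AF_INET, "192.168.2.10", &server.sin_addr );  /* Placeholder DUT address. */

    if( connect( fd, ( struct sockaddr * ) &server, sizeof( server ) ) != 0 )
    {
        return 1;
    }

    for( ;; )
    {
        size_t received = 0U;

        send( fd, "thank you", 9, 0 );     /* Change direction: client -> server. */

        while( received < REPLY_SIZE )     /* Server -> client: the big reply. */
        {
            ssize_t n = recv( fd, buffer, sizeof( buffer ), 0 );

            if( n <= 0 )
            {
                close( fd );
                return 1;
            }

            received += ( size_t ) n;
        }

        usleep( 100000 );                  /* 100 ms cycle, as in the test. */
    }
}
~~~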
Hi @htibosch
It seems that there is no issue with the last modification.
I will inform you if I detect any issues.
Thank you very much for your great effort.


