FreeRTOS + FAT ff_fwrite() issue

Hello everyone,
I’m working on a custom Zynq board that includes a 64 GB eMMC. I’ve encountered an issue: if I write more than 272 KB of data in a single call, an unexpected extra byte appears at offset 0x10000, so all data beyond that offset is shifted by one byte. Below is my test code for clarity:

#define WRITE_BUFFER_SIZE (500 * 1024u)   /* parenthesised so the macro expands safely */
#define READ_BUFFER_SIZE  (500 * 1024u)

uint8_t WriteBuffer[WRITE_BUFFER_SIZE];
uint8_t ReadBuffer[READ_BUFFER_SIZE];
uint8_t Count = 0;

/* Fill the buffer with a repeating 0x00..0xFF test pattern. */
for(int i = 0; i < WRITE_BUFFER_SIZE; i++){
    WriteBuffer[i] = Count++;
}

pCurrentFile = ff_fopen(CurrentFilePath, "w+");
size_t bytes_written = ff_fwrite((const void*)WriteBuffer, sizeof(uint8_t), 1024 * 271, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);

pCurrentFile = ff_fopen(CurrentFilePath, "r+");
size_t bytes_read = ff_fread(ReadBuffer, sizeof(uint8_t), 1024 * 271, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);
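
For completeness, a minimal sketch of checking the return values, using the same names as above; xil_printf is an assumption about my Zynq BSP, and stdioGET_ERRNO() is FreeRTOS+FAT’s per-task errno accessor:

if (bytes_written != 1024 * 271) {
    /* Short write: fewer items were written than requested. */
    xil_printf("short write: %u of %u bytes, errno %d\r\n",
               (unsigned)bytes_written, 1024u * 271u, stdioGET_ERRNO());
}
if (fs_ret != 0) {
    /* ff_fclose() returns 0 on success, -1 with errno set on failure. */
    xil_printf("ff_fclose failed, errno %d\r\n", stdioGET_ERRNO());
}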


After running the code, I downloaded the file to my PC and checked its contents with Hex Editor Neo. For the 271 KB file everything is as expected: no shifting, no extra byte.
271 KB file:
271KB.zip (1.6 KB)

Hex Editor screenshot:

When I increase the write and read sizes to 272 KB, the issue appears.

The code for writing 272 KB:

#define WRITE_BUFFER_SIZE (500 * 1024u)
#define READ_BUFFER_SIZE  (500 * 1024u)

uint8_t WriteBuffer[WRITE_BUFFER_SIZE];
uint8_t ReadBuffer[READ_BUFFER_SIZE];
uint8_t Count = 0;

/* Fill the buffer with a repeating 0x00..0xFF test pattern. */
for(int i = 0; i < WRITE_BUFFER_SIZE; i++){
    WriteBuffer[i] = Count++;
}

pCurrentFile = ff_fopen(CurrentFilePath, "w+");
size_t bytes_written = ff_fwrite((const void*)WriteBuffer, sizeof(uint8_t), 1024 * 272, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);

pCurrentFile = ff_fopen(CurrentFilePath, "r+");
size_t bytes_read = ff_fread(ReadBuffer, sizeof(uint8_t), 1024 * 272, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);


Again I downloaded the file to the PC and checked it with Hex Editor Neo. There is an extra 0xFF byte at address 0x10000, and all data after that address is shifted by one byte.
272 KB file:
272KB.zip (1.6 KB)

Hex Editor screenshot:

To verify the integrity of the read-back data, I also dumped the RAM after reading the file and compared the dump with the file downloaded over FileZilla. Both files are identical.
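
A related on-target check, sketched below, is to compare WriteBuffer against ReadBuffer directly and report the first mismatching offset (here I would expect 0x10000); again, xil_printf is an assumption about the Zynq BSP:

#include <string.h>

/* Compare what was written with what was read back and report the
 * first offset at which the two buffers diverge. */
size_t len = 1024u * 272u;
if (memcmp(WriteBuffer, ReadBuffer, len) != 0) {
    for (size_t i = 0; i < len; i++) {
        if (WriteBuffer[i] != ReadBuffer[i]) {
            xil_printf("first mismatch at 0x%08x: wrote 0x%02x, read 0x%02x\r\n",
                       (unsigned)i, WriteBuffer[i], ReadBuffer[i]);
            break;
        }
    }
}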

What could be causing this extra 0xFF byte? Does ff_fwrite() have a limit on how much data it can write in one call?

Thanks for your help.

I commonly write files of up to 4095 MiB with FreeRTOS + FAT ff_fwrite() and read them back, verifying the data (see big_file_test.c), without problems.

I recommend testing your Media Driver by porting ChaN’s Compatibility Checker for Storage Device Control Module (one of my ports: app4-IO_module_function_checker.c). It bypasses the file system entirely and drives the block device driver (Media Driver) interface directly, which makes driver testing and debugging much simpler. However, you will have to reformat your medium after running it.
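
The idea looks roughly like the sketch below, which writes a pattern across a 64 KiB boundary of the medium (sector 128 at 512-byte sectors sits at byte offset 0x10000) and reads it back. The names prvEmmcWriteBlocks/prvEmmcReadBlocks are placeholders for whatever read/write block functions your media driver registers with FreeRTOS+FAT, assumed to use the usual block-function signature:

#include <string.h>
#include "FreeRTOS.h"
#include "ff_headers.h"

#define SECTOR_SIZE   512u
#define TEST_SECTORS  16u
#define START_SECTOR  120u   /* sectors 120..135 straddle a 64 KiB boundary */

static uint8_t ucTxBuf[ SECTOR_SIZE * TEST_SECTORS ];
static uint8_t ucRxBuf[ SECTOR_SIZE * TEST_SECTORS ];

void vDirectDriverTest( FF_Disk_t *pxDisk )
{
    /* Fill with the same repeating pattern used in the file test. */
    for( uint32_t i = 0; i < sizeof( ucTxBuf ); i++ )
    {
        ucTxBuf[ i ] = ( uint8_t ) i;
    }

    /* Placeholder names: substitute your driver's block functions. */
    prvEmmcWriteBlocks( ucTxBuf, START_SECTOR, TEST_SECTORS, pxDisk );
    prvEmmcReadBlocks( ucRxBuf, START_SECTOR, TEST_SECTORS, pxDisk );

    /* The buffers must match exactly; an inserted byte will trip this. */
    configASSERT( memcmp( ucTxBuf, ucRxBuf, sizeof( ucTxBuf ) ) == 0 );
}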

Hi Carl, thanks for your reply.
Before porting ChaN’s module, I looked at your code and noticed that your test calls the write API as size_t bw = ff_fwrite(buff, 1, BUFFSZ, file_p);, then keeps calling it in a loop until the requested total size has been written. If I follow your approach, it works for me.
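
In other words, something like the sketch below, reusing the WriteBuffer, pCurrentFile and CurrentFilePath names from my test code above; the 32 KiB chunk size is an arbitrary choice:

#define CHUNK_SIZE  (32u * 1024u)   /* arbitrary, well below the failing 272 KiB */
#define TOTAL_SIZE  (272u * 1024u)

pCurrentFile = ff_fopen(CurrentFilePath, "w+");

size_t remaining = TOTAL_SIZE;
const uint8_t *pSrc = WriteBuffer;

while (remaining > 0) {
    size_t chunk = (remaining < CHUNK_SIZE) ? remaining : CHUNK_SIZE;
    size_t bw = ff_fwrite(pSrc, 1, chunk, pCurrentFile);
    if (bw != chunk) {
        /* Short or failed write: stop and inspect stdioGET_ERRNO(). */
        break;
    }
    pSrc += bw;
    remaining -= bw;
}

fs_ret = ff_fclose(pCurrentFile);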

But I don’t know whether that is the required way (if a single large write is not supported for some reason, please let me know, as I have no idea about that). I want to write a big chunk of data to the file with just one call, like size_t bytes_written = ff_fwrite((const void*)WriteBuffer, sizeof(uint8_t), 512000, pCurrentFile);. When I use ff_fwrite() like that, the issue described above occurs.

Could you test your system with the xItems parameter increased to around 512000, if that makes sense on your setup?

I don’t have any MCUs with enough SRAM for a write buffer that large. For example, the Raspberry Pi Pico has only 264 kB of SRAM.

Got it, thanks for your help. I will update this post if I find anything.