Hello everyone,
I’m working on a custom Zynq board with a 64GB eMMC. I’ve run into an issue: if I write 272KB or more of data in one call, an unexpected extra byte appears at offset 0x10000, and all data beyond that offset is shifted by one byte. Below is my test code for clarity:
#define WRITE_BUFFER_SIZE (500 * 1024u)
#define READ_BUFFER_SIZE  (500 * 1024u)
uint8_t WriteBuffer[WRITE_BUFFER_SIZE];
uint8_t ReadBuffer[READ_BUFFER_SIZE];
uint8_t Count = 0;
for (uint32_t i = 0; i < WRITE_BUFFER_SIZE; i++) {
    WriteBuffer[i] = Count++;   /* 0x00..0xFF ramp, wraps every 256 bytes */
}
pCurrentFile = ff_fopen(CurrentFilePath, "w+");
size_t bytes_written = ff_fwrite((const void *)WriteBuffer, sizeof(uint8_t), 1024 * 271, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);
pCurrentFile = ff_fopen(CurrentFilePath, "r+");
size_t bytes_read = ff_fread(ReadBuffer, sizeof(uint8_t), 1024 * 271, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);
After running the code, I downloaded the file to my PC and inspected it with Hex Editor Neo. The 271KB file is exactly as expected: no shifting and no extra byte.
271KB file:
271KB.zip (1.6 KB)
Hex Editor Screen Shot:
When I increase the write and read size to 272KB, the issue appears.
The code for writing 272KB:
#define WRITE_BUFFER_SIZE (500 * 1024u)
#define READ_BUFFER_SIZE  (500 * 1024u)
uint8_t WriteBuffer[WRITE_BUFFER_SIZE];
uint8_t ReadBuffer[READ_BUFFER_SIZE];
uint8_t Count = 0;
for (uint32_t i = 0; i < WRITE_BUFFER_SIZE; i++) {
    WriteBuffer[i] = Count++;   /* 0x00..0xFF ramp, wraps every 256 bytes */
}
pCurrentFile = ff_fopen(CurrentFilePath, "w+");
size_t bytes_written = ff_fwrite((const void *)WriteBuffer, sizeof(uint8_t), 1024 * 272, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);
pCurrentFile = ff_fopen(CurrentFilePath, "r+");
size_t bytes_read = ff_fread(ReadBuffer, sizeof(uint8_t), 1024 * 272, pCurrentFile);
fs_ret = ff_fclose(pCurrentFile);
Again I downloaded the file to my PC and checked it with Hex Editor Neo. There is an extra 0xFF byte at address 0x10000, and every byte after that address is shifted by one.
272KB file:
272KB.zip (1.6 KB)
Hex Editor Screen Shot:
To rule out the transfer itself, I also dumped the RAM after reading the file back and compared that dump with the file downloaded via FileZilla; the two files are identical.
What could be causing this extra 0xFF byte? Does ff_fwrite() have a limit on how much data it can write in one call?
Thanks for your help.