FreeRTOS + FAT: exFAT support?

I’ve got FreeRTOS+FAT working with SD cards. Testing with 8 GB and 32 GB cards works well. However, I just tried sticking in a new 128 GB SanDisk Ultra SDXC UHS-1 card, and the mount fails. It seems to get unhappy around here:

		/* Now we get the Partition sector. */
		pxBuffer = FF_GetBuffer( pxIOManager, pxPartition->ulBeginLBA, FF_MODE_READ );
...
		pxPartition->usBlkSize = FF_getShort( pxBuffer->pucBuffer, FF_FAT_BYTES_PER_SECTOR );
		if( ( ( pxPartition->usBlkSize % 512 ) != 0 ) || ( pxPartition->usBlkSize == 0 ) )
		{
			/* An error here should override the current error, as its likely fatal. */

where usBlkSize is 0.

If I stick the card into my PC, Windows says the “File system” is “exFAT”. Does FreeRTOS+FAT support exFAT?

I presume that I could format this card as FAT32, and it would work, right? Unfortunately, my embedded system has no format capability at the moment, and it seems a lot to ask of an end user to somehow format big cards in a way that Windows does not support.

FreeRTOS+FAT is not going to work with a disk formatted as exFAT. I suggest reformatting the disk as FAT32. I think you can do that using the /FS:filesystem command-line option to the Windows format command, and I’m sure you could do it in Linux too.

Yes indeed, FreeRTOS+FAT does not recognise exFAT ( yet ).

Btw, the library can also initialise and format SD-cards. The biggest SD-card that I tested with was ( only ) 32 GB.

FreeRTOS+FAT is not going to work with a disk formatted as exFAT. I suggest reformatting the disk as FAT32. I think you can do that using the /FS:filesystem command-line option to the Windows format command, and I’m sure you could do it in Linux too.

I tried a couple of Windows commands first, and they would grind away for half an hour or more, and then fail with an error message:

DISKPART> format fs=fat32
Virtual Disk Service error:
The volume size is too big.

or

format g: /fs:fat32
The volume is too big for FAT32.
Format failed.

I used this utility: http://www.ridgecrop.demon.co.uk/index.htm?guiformat.htm to format the 128 GB SD card and now it is working fine running under FreeRTOS+FAT. (And that utility is fast!)

I was able to mount it and read the files just fine with Windows 10.

Ah, nice; I will definitely have to look into that!

You wrote:

I used this utility to format the 128 GB SD card and
now it is working fine running under FreeRTOS+FAT.

Good to hear, and thanks for reporting this back. Note that FreeRTOS+TCP [edit](I think this is meant to say FreeRTOS+FAT)[/edit] cannot handle a partition that is bigger than 64 GB.

Btw, the library can also initialise and format SD-cards.

Ah, nice; I will definitely have to look into that!

These are the two functions you see in ff_format.h: FF_Partition and FF_Format.
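For reference, a minimal sketch of how those two calls might be combined for a single FAT32 partition. pxDisk is assumed to have been created by your media driver already, and the parameter values here are illustrative, not taken from this thread:

    /* Sketch: create one partition on the disk and format it as FAT32. */
    #include <string.h>
    #include "ff_format.h"

    FF_Error_t xPartitionAndFormat( FF_Disk_t *pxDisk )
    {
        FF_PartitionParameters_t xParameters;
        FF_Error_t xError;

        memset( &xParameters, 0, sizeof( xParameters ) );
        xParameters.ulSectorCount = pxDisk->ulNumberOfSectors;
        xParameters.ulHiddenSectors = 8;      /* Unused sectors before the partition. */
        xParameters.xPrimaryCount = 1;        /* A single primary partition. */
        xParameters.eSizeType = eSizeIsQuota; /* Divide the remaining space. */

        /* Write the partition table. */
        xError = FF_Partition( pxDisk, &xParameters );
        if( FF_isERR( xError ) == pdFALSE )
        {
            /* Format partition 0: xPreferFAT16 = pdFALSE (so FAT32),
               xSmallClusters = pdFALSE (large clusters, better throughput). */
            xError = FF_Format( pxDisk, 0, pdFALSE, pdFALSE );
        }

        return xError;
    }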

In general, when working with SD cards, I have found it very practical to use Linux tools (e.g., Ubuntu). Linux understands much more about partitions and formats, including FAT.
It can also check and repair an SD card with dosfsck /dev/sdc...

Do you have a reference for that? What’s the reason for that limitation? Does it apply to all operations? (In other words, if a 128 GB card is already formatted as one FAT32 partition, can FreeRTOS+FAT read and write files [each <= 4 GB in size] in that 128 GB partition?)

In general, when working with SD cards, I have found it very practical to use Linux tools (e.g., Ubuntu). Linux understands much more about partitions and formats, including FAT.
It can also check and repair an SD card with dosfsck /dev/sdc...

Thanks for the tip. I’m using Cypress’ PSoC Creator, so I’m stuck on Windows, although I do run the Windows Subsystem for Linux. I should try accessing the SD card slot from WSL.

Do you have a reference for that?

I don’t.
I maintained FreeRTOS+FAT for six years and I never used a drive larger than 32 GB myself. Other people used it on 64 GB drives and had files larger than 2 GB. I once repaired ff_fseek() to interpret the offset parameter as unsigned when SEEK_SET is used.

But please try it out on a multi-partition drive, and please report how it goes. As long as all addressing is in (512-byte) sectors it should be possible.

What’s the reason for that limitation?

I’m not sure whether there will be byte-addressing or overflows somewhere. It’s just that I haven’t tested it with these larger drives.

By the way, you might have seen that FreeRTOS+FAT has a kind of mounting; see ff_sys.h.
It is very basic and simple:

/sdcard/
/ram/

The above directories in the root might represent two different file systems, located either on a card or a RAM drive.

When you create 4 I/O managers for 4 partitions, there will be a common driver below them for low-level SD-card access. When you use more than one partition simultaneously, access to that driver may need mutex protection against concurrent use.

But you will be able to represent them in a single tree:

/sdcard_p1/
/sdcard_p2/
/sdcard_p3/
/ ( The root will contain partition sdcard_p0 )
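A hedged sketch of how that tree could be built with FF_FS_Add() from ff_sys.h; prvCreateDiskForPartition() is a hypothetical helper standing in for whatever your media driver provides:

    /* Sketch: one FF_Disk_t (and so one I/O manager) per partition,
       all mounted into a single directory tree. */
    #include "ff_sys.h"

    void vMountPartitions( void )
    {
        FF_Disk_t *pxDisk0 = prvCreateDiskForPartition( 0 ); /* Hypothetical helper. */
        FF_Disk_t *pxDisk1 = prvCreateDiskForPartition( 1 );

        FF_FS_Add( "/", pxDisk0 );          /* Partition 0 becomes the root. */
        FF_FS_Add( "/sdcard_p1", pxDisk1 ); /* Partition 1 appears as a directory. */
    }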

Linux dosfsck: I have a virtual Ubuntu on my Windows 10 laptop. That also works well.

Hein Tibosch wrote:

I maintained FreeRTOS+FAT for six years…

Good job! Seems to be rock solid to me.

…I never used a drive larger than 32 GB myself. Other people used it on 64 GB drives and had files larger than 2 GB…

I have been running a 128 GB card (1 partition, formatted FAT32) for days, and haven’t seen any problems. vMultiTaskStdioWithCWDTest has been running overnight, and I have run other tests that have created files in the hundreds of megabytes. I don’t anticipate using individual files any larger than that. I should see what happens if I write huge amounts of data. Maybe I can adapt the pseudo random number generator technique used in ChaN’s “Low level disk I/O module function checker” for checking the data.
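Roughly, that technique would look like the sketch below: seed a small PRNG, stream its output into a file, then re-seed and compare on read-back. (This is a paraphrase of the idea, not ChaN’s actual code; the xorshift generator and the helper name are placeholders.)

    /* Sketch: PRNG-based write/verify pass over a test file. */
    #include "FreeRTOS.h"
    #include "ff_stdio.h"

    static uint32_t ulState;

    static uint32_t prvRand( void ) /* Tiny xorshift32 generator. */
    {
        ulState ^= ulState << 13;
        ulState ^= ulState >> 17;
        ulState ^= ulState << 5;
        return ulState;
    }

    BaseType_t xWriteThenVerify( const char *pcName, uint32_t ulWords )
    {
        uint32_t ul, ulValue;
        FF_FILE *pxFile = ff_fopen( pcName, "w" );

        if( pxFile == NULL )
        {
            return pdFAIL;
        }
        ulState = 0xDEADBEEF; /* Fixed seed so the sequence is reproducible. */
        for( ul = 0; ul < ulWords; ul++ )
        {
            ulValue = prvRand();
            ff_fwrite( &ulValue, sizeof( ulValue ), 1, pxFile );
        }
        ff_fclose( pxFile );

        pxFile = ff_fopen( pcName, "r" );
        ulState = 0xDEADBEEF; /* Re-seed: same seed, same expected sequence. */
        for( ul = 0; ul < ulWords; ul++ )
        {
            ff_fread( &ulValue, sizeof( ulValue ), 1, pxFile );
            if( ulValue != prvRand() )
            {
                ff_fclose( pxFile );
                return pdFAIL; /* Mismatch: data corruption detected. */
            }
        }
        ff_fclose( pxFile );
        return pdPASS;
    }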

But please try it out on a multi-partition drive, and please report how it goes. As long as all addressing is in (512-byte) sectors it should be possible.

For my project, I need the data on the SD card to be readable on Windows, and AFAIK, Windows will only look at one partition on an SD card.

You might have seen that FreeRTOS+FAT has a kind of mounting…

I am using that. Currently, I have two cards, one called “UserSDCrd” that I usually mount at “/UserSDCrd” and another called “SysSDCrd0” that I usually mount at “/SysSDCrd0” (though I have used other mount points). I have been running these simultaneously for days, with no problems; a simple chdir() moves from one card to the other. Right now, a test is running four vMultiTaskStdioWithCWDTest tasks on two cards. Soon, I will be adding a third card.

When you create 4 I/O managers for 4 partitions, there will be a common driver below them for low-level SD-card access. When you use more than one partition simultaneously, access to that driver may need mutex protection against concurrent use.

Yes, I have a mutex guard on each SPI (because I can run multiple cards on one SPI), and another on each SD card (since a card can really only do one thing at a time).
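Roughly, that layering looks like the following sketch, using standard FreeRTOS mutexes (the names and the transfer placeholder are illustrative):

    #include "FreeRTOS.h"
    #include "semphr.h"

    /* Created elsewhere with xSemaphoreCreateMutex(). */
    SemaphoreHandle_t xSpiMutex;  /* One per SPI bus (several cards may share it). */
    SemaphoreHandle_t xCardMutex; /* One per SD card (a card does one thing at a time). */

    void vCardTransaction( void )
    {
        xSemaphoreTake( xCardMutex, portMAX_DELAY ); /* Claim the card first... */
        xSemaphoreTake( xSpiMutex, portMAX_DELAY );  /* ...then the shared bus. */
        /* ... perform the SPI transfer here ... */
        xSemaphoreGive( xSpiMutex );
        xSemaphoreGive( xCardMutex );
    }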

Thanks,

Carl

vMultiTaskStdioWithCWDTest Log.zip (3.3 KB)

OK, I ran the attached big_file_test.c with a file size of 0xC0000000 (3 GB). It took 12+ hours, but no errors were found.

FreeRTOS+FAT+CLI is a little confused about the file size (file big3):

> dir
System Volume Information [directory] [size=0]
SpeedNTemp.csv [writable file] [size=18299045]
Vibration.csv [writable file] [size=334784288]
.~lock.SpeedNTemp.csv# [writable file] [size=87]
big1 [writable file] [size=1000000]
big3 [writable file] [size=-1073741824]
.. [directory] [size=1024]
. [directory] [size=1024]

Windows is a little happier:

C:\Users\carlk>dir g:
 Volume in drive G is THE 128GB
 Volume Serial Number is 1315-2346

 Directory of G:\

The parameter is incorrect.
The parameter is incorrect.
          18,299,045 SpeedNTemp.csv
The parameter is incorrect.
         334,784,288 Vibration.csv
The parameter is incorrect.
           1,000,000 big1
The parameter is incorrect.
       3,221,225,472 big3
               4 File(s)  3,575,308,805 bytes
               0 Dir(s)  124,241,870,848 bytes free

C:\Users\carlk>chkdsk g:
The type of the file system is FAT32.
Volume THE 128GB created 3/16/2020 11:49 AM
Volume Serial Number is 1315-2346
Windows is verifying files and folders...
File and folder verification is complete.

Windows has scanned the file system and found no problems.
No further action is required.
  124,821,728 KB total disk space.
           64 KB in 2 hidden files.
           64 KB in 2 folders.
    3,491,616 KB in 6 files.
  121,329,952 KB are available.

       32,768 bytes in each allocation unit.
    3,900,679 total allocation units on disk.
    3,791,561 allocation units available on disk.

I don’t think it likes the lack of timestamps on the files. Something I need to work on.

Looking at it in Ubuntu on Windows Subsystem for Linux (WSL):

carlk@Dell:~$ sudo mkdir /mnt/g
carlk@Dell:~$ sudo mount -t drvfs G: /mnt/g
carlk@Dell:~$ ls -l /mnt/g
total 3491552
-rwxrwxrwx 1 root root   18299045 Dec 31  1969  SpeedNTemp.csv
drwxrwxrwx 1 root root        512 Mar 16 11:49 'System Volume Information'
-rwxrwxrwx 1 root root  334784288 Dec 31  1969  Vibration.csv
-rwxrwxrwx 1 root root    1000000 Dec 31  1969  big1
-rwxrwxrwx 1 root root 3221225472 Dec 31  1969  big3
carlk@Dell:~$

I’m not sure how to run dosfsck in this environment.

carlk@Dell:~$ mount
rootfs on / type lxfs (rw,noatime)
none on /dev type tmpfs (rw,noatime,mode=755)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,noatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,noatime)
devpts on /dev/pts type devpts (rw,nosuid,noexec,noatime,gid=5,mode=620)
none on /run type tmpfs (rw,nosuid,noexec,noatime,mode=755)
none on /run/lock type tmpfs (rw,nosuid,nodev,noexec,noatime)
none on /run/shm type tmpfs (rw,nosuid,nodev,noatime)
none on /run/user type tmpfs (rw,nosuid,nodev,noexec,noatime,mode=755)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
cgroup on /sys/fs/cgroup type tmpfs (rw,relatime,mode=755)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,relatime,devices)
C:\ on /mnt/c type drvfs (rw,noatime,uid=1000,gid=1000,case=off)
G: on /mnt/g type drvfs (rw,relatime,case=off)
carlk@Dell:~$ dosfsck /mnt/g
fsck.fat 4.1 (2017-01-24)
open: Is a directory
carlk@Dell:~$ dosfsck G:
fsck.fat 4.1 (2017-01-24)
open: No such file or directory

Next, I will make some copies of this 3 GB file to get near the 32 GB boundary and run some more tests.

big_file_test.c (2.7 KB)

OK, did that, and going over 32 GB is no problem. The test big_file_test ran with no errors.

In Windows Subsystem for Linux, before big_file_test run:

carlk@Dell:/mnt/g$ du -c -h *
0       System Volume Information/ClientRecoveryPasswordRotation
0       System Volume Information/AadRecoveryPasswordDelete
64K     System Volume Information
3.0G    big1
992K    big1MB
27G     hog
31G     total
carlk@Dell:~$ df -h /mnt/g
Filesystem      Size  Used Avail Use% Mounted on
G:              120G   31G   90G  26% /mnt/g

In Windows Subsystem for Linux, after big_file_test run:

carlk@Dell:/mnt/g$ du -c -h *
0       System Volume Information/ClientRecoveryPasswordRotation
0       System Volume Information/AadRecoveryPasswordDelete
64K     System Volume Information
3.0G    big1
992K    big1MB
3.0G    big2
27G     hog
34G     total
carlk@Dell:/mnt/g$ df -h /mnt/g
Filesystem      Size  Used Avail Use% Mounted on
G:              120G   34G   87G  28% /mnt/g

Windows Command Prompt:

C:\Users\carlk>chkdsk g:
The type of the file system is FAT32.
Volume THE 128GB created 3/16/2020 11:49 AM
Volume Serial Number is 1315-2346
Windows is verifying files and folders...
File and folder verification is complete.

Windows has scanned the file system and found no problems.
No further action is required.
  124,821,728 KB total disk space.
           64 KB in 2 hidden files.
           96 KB in 3 folders.
   34,604,064 KB in 14 files.
   90,217,472 KB are available.

       32,768 bytes in each allocation unit.
    3,900,679 total allocation units on disk.
    2,819,296 allocation units available on disk.

Next, I will try straddling the 64 GB boundary.

BTW, is what I’m doing in big_file_test.c grossly inefficient, or is this kind of test just going to take a long time? I currently have ffconfigCACHE_WRITE_THROUGH set to 1 and xIOManagerCacheSize = 4 * SECTOR_SIZE (i.e., 2 kB).

Thanks for these reports about your experiences!

FreeRTOS+FAT+CLI is a little confused about the file size:
big3 [writable file] [size=-1073741824]

The CLI is indeed printing the file size as signed instead of unsigned. The size is about 3 GB.
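A one-liner shows the wrap-around: 3 GiB does not fit in a signed 32-bit integer, so a signed format specifier prints the negative two’s-complement value seen above.

    #include <stdio.h>
    #include <stdint.h>

    int main( void )
    {
        uint32_t ulSize = 3221225472u; /* 0xC0000000, the size of big3. */

        printf( "%ld\n", ( long ) ( int32_t ) ulSize ); /* -1073741824: the CLI's output. */
        printf( "%lu\n", ( unsigned long ) ulSize );    /* 3221225472: the real size. */
        return 0;
    }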

BTW, is what I’m doing in big_file_test.c grossly inefficient,
or is this kind of test just going to take a long time?
I currently have ffconfigCACHE_WRITE_THROUGH set to 1 and
xIOManagerCacheSize = 4 * SECTOR_SIZE (i.e., 2 kB).

Looking at big_file_test.c: the problem is that it reads and writes the file in very small chunks, which creates a lot of overhead. But it is good if you want to test this type of access.

The macro ffconfigCACHE_WRITE_THROUGH is not intended to increase efficiency; it makes sure that cache buffers are written to disk as soon as they are released, so it increases safety.

If you read or write a file with buffers that are a multiple of 512 bytes, the caching system will be skipped: the data goes directly between the disk and your buffer, and vice versa.
( Even on a laptop I found this effect: using large buffers increases the access speed. )
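In practice that means staging the data in a RAM buffer whose size is a multiple of 512 bytes and handing it to ff_fwrite() in one call. A sketch, with an illustrative buffer size:

    /* Sketch: write in sector-sized multiples so that FreeRTOS+FAT
       can bypass its cache and transfer straight from this buffer. */
    #include "ff_stdio.h"

    #define SECTOR_SIZE     512
    #define BUFFER_SECTORS  4 /* Illustrative; bigger buffers go faster. */

    static uint8_t ucBuffer[ BUFFER_SECTORS * SECTOR_SIZE ];

    size_t xWriteChunk( FF_FILE *pxFile )
    {
        /* Fill ucBuffer with the next chunk of data, then write it in one call. */
        return ff_fwrite( ucBuffer, 1, sizeof( ucBuffer ), pxFile );
    }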

I rewrote my test program to buffer 4 sectors (i.e., blocks; 512 bytes each) of data for writes and reads. For a file of 0x10000000 bytes (0.25 GiB), this reduced the write times from 2203 s to 834 s (about a 2.6x speedup) and the read times from 1718 s to 736 s (about 2.3x). I will have to keep this in mind as I develop my real application, which in one scenario will be doing many thousands of printf()s of integers.

While I was at it, I also switched to a locally sourced pseudo random number generator so that I can run read comparisons on the PC.

Now that I can time runs, I should see if ffconfigCACHE_WRITE_THROUGH has any effect on speed.

big_file_test.c (6.3 KB)

With a Release build (some optimization), I was able to get the times down to 799 s and 712 s (around 4% faster).

Changing ffconfigCACHE_WRITE_THROUGH to 0 made no difference at all to the times, which makes sense if I am now skipping the caching system. (But would it make a difference for the smaller, 4-byte writes and reads that I was doing before?)

OK, I have done that successfully. The big_file_test ran over the 64 GB line with no errors. After big_file_test completed, I ran “MultiTask Stdio With CWD Test” for a while to make sure directories were still working well. No problems.

In Windows Subsystem for Linux, before big_file_test run:

carlk@Dell:~$ df -h /mnt/g
Filesystem      Size  Used Avail Use% Mounted on
G:              120G   62G   58G  52% /mnt/g
carlk@Dell:~$ du -c -h /mnt/g/*
0       /mnt/g/System Volume Information/ClientRecoveryPasswordRotation
0       /mnt/g/System Volume Information/AadRecoveryPasswordDelete
64K     /mnt/g/System Volume Information
3.0G    /mnt/g/big1
992K    /mnt/g/big1MB
3.0G    /mnt/g/big2
1.5G    /mnt/g/big3
27G     /mnt/g/hog
27G     /mnt/g/hog2
62G     total

In Windows Subsystem for Linux, after big_file_test run:

carlk@Dell:~$ df -h /mnt/g
Filesystem      Size  Used Avail Use% Mounted on
G:              120G   65G   55G  55% /mnt/g
carlk@Dell:~$ du -c -h /mnt/g/*
224K    /mnt/g/SpeedNTemp.csv
0       /mnt/g/System Volume Information/ClientRecoveryPasswordRotation
0       /mnt/g/System Volume Information/AadRecoveryPasswordDelete
64K     /mnt/g/System Volume Information
3.2M    /mnt/g/Vibration.csv
3.0G    /mnt/g/big1
992K    /mnt/g/big1MB
992K    /mnt/g/big1MB-1
3.0G    /mnt/g/big2
1.5G    /mnt/g/big3
3.0G    /mnt/g/big4
0       /mnt/g/big4-1
27G     /mnt/g/hog
27G     /mnt/g/hog2
65G     total

Windows Command Prompt:

C:\Users\carlk>chkdsk g:
The type of the file system is FAT32.
The volume is in use by another process. Chkdsk
might report errors when no corruption is present.
Volume THE 128GB created 3/16/2020 11:49 AM
Volume Serial Number is 1315-2346
Windows is verifying files and folders...
File and folder verification is complete.

Windows has scanned the file system and found no problems.
No further action is required.
  124,821,728 KB total disk space.
           64 KB in 2 hidden files.
          128 KB in 4 folders.
   67,638,688 KB in 29 files.
   57,182,816 KB are available.

       32,768 bytes in each allocation unit.
    3,900,679 total allocation units on disk.
    1,786,963 allocation units available on disk.

I will continue to press on deeper into this 128 GB card, but > 64 GB does not seem to be a problem. Not sure where that ugly rumor came from.

It took 11,673 s (about 3.24 hrs) to write the 3 GB file. So, about 275,955 bytes/s, or 0.26 MB/s, or on the order of a GB/hour. At that rate, copying a full 128 GB card to another would take roughly 5 days (120 * 1024 * 1024 * 1024 / 275955 / 60 / 60 / 24).

To improve on that, I suspect I’d need to buffer 32 kB (what chkdsk calls “32,768 bytes in each allocation unit”). I only have 288 KB of SRAM to work with in the whole system, so that might be hard to do. Or maybe increasing the SCK frequency from 6 MHz to 10 MHz would help? I really don’t know much about SD cards.
