From: Vladimir Vassilevsky on


Mark Borgerson wrote:

> One other problem that I found with one FAT file system (I'm not sure
> whether it was FatFS): As files got longer, the time to do a file
> write got longer also. The problem was that the code would scan through
> a lot of the FAT in order to allocate a new cluster when needed. When
> a file got very large, that could mean reading many FAT sectors to find
> a new cluster to add to the file. When files are often opened, written,
> and deleted, the empty clusters in the FAT can be anywhere in the
> table, and you may need to scan a lot of the table to find
> an empty cluster. There are ways around this. Some involve making
> assumptions about single-threaded file access, only one open file,
> or caching the FAT.
>
> If you will be writing large files and deleting files on the disk,
> I recommend you study the code that allocates a new cluster.
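
One of the workarounds Mark mentions is simply to remember where the last
allocation ended and resume the FAT scan from there instead of starting at
cluster 2 every time (FAT32's FSINFO "next free" hint is the same idea).
A rough sketch, not FatFS's actual code, assuming FAT32 and hypothetical
fat_read_entry()/fat_write_entry() accessors for one 32-bit FAT entry:

#include <stdint.h>

/* Hypothetical helpers: read/write one 32-bit FAT entry. */
uint32_t fat_read_entry(uint32_t cluster);
void     fat_write_entry(uint32_t cluster, uint32_t value);

#define FAT_FREE  0x00000000u
#define FAT_EOC   0x0FFFFFFFu

static uint32_t next_free_hint = 2;         /* first data cluster */

/* Allocate a free cluster, resuming the scan where the last allocation
   left off instead of rescanning the whole FAT each time a file grows. */
uint32_t fat_alloc_cluster(uint32_t total_clusters)
{
    uint32_t c = next_free_hint;
    uint32_t scanned;

    for (scanned = 0; scanned < total_clusters; scanned++, c++) {
        if (c >= total_clusters + 2)
            c = 2;                          /* wrap to first data cluster */
        if (fat_read_entry(c) == FAT_FREE) {
            fat_write_entry(c, FAT_EOC);    /* claim it as end-of-chain   */
            next_free_hint = c + 1;         /* resume here next time      */
            return c;
        }
    }
    return 0;                               /* no free clusters left      */
}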

Yes-yes-yes. Once things grow large, the FAT overhead increases
tremendously. A typical scenario in our applications: a directory of
~10000 files. One thread writes to a file in this directory while another
thread scans through the files in the same directory. To ensure
coherency, the directory has to be re-read and re-sorted on every access.
Developing an efficient multithreaded filesystem is no simple task;
it needs a lot of memory, too.
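
In a two-thread setup like that, one way to keep the full re-read and
re-sort off the common path is to let the writer bump a change counter
that the scanning thread checks before each pass. A rough sketch,
assuming a pthread-style mutex (on an RTOS, substitute its own primitive)
and hypothetical dir_* names:

#include <stdint.h>
#include <pthread.h>

/* Hypothetical shared state: the writer bumps dir_generation after any
   create/delete/rename, and the scanning thread re-reads and re-sorts
   the directory only when the counter has changed since its last pass. */

static pthread_mutex_t dir_lock = PTHREAD_MUTEX_INITIALIZER;
static uint32_t dir_generation = 0;

void dir_mark_dirty(void)                   /* writer thread */
{
    pthread_mutex_lock(&dir_lock);
    dir_generation++;
    pthread_mutex_unlock(&dir_lock);
}

int dir_needs_rescan(uint32_t *last_seen)   /* scanning thread */
{
    int changed;

    pthread_mutex_lock(&dir_lock);
    changed = (*last_seen != dir_generation);
    *last_seen = dir_generation;
    pthread_mutex_unlock(&dir_lock);
    return changed;                         /* re-read + re-sort only if set */
}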

Vladimir Vassilevsky
DSP and Mixed Signal Design Consultant
http://www.abvolt.com

From: Peter Dickerson on
"Vladimir Vassilevsky" <nospam(a)nowhere.com> wrote in message
news:Hoidne-fJ9RHwSrWnZ2dnUVZ_vydnZ2d(a)giganews.com...
>
>
> Mark Borgerson wrote:
>
>> One other problem that I found with one FAT file system (I'm not sure
>> whether it was FatFS): As files got longer, the time to do a file
>> write got longer also. The problem was that the code would scan through
>> a lot of the FAT in order to allocate a new cluster when needed. When
>> a file got very large, that could mean reading many FAT sectors to find
>> a new cluster to add to the file. When files are often opened, written,
>> and deleted, the empty clusters in the FAT can be anywhere in the
>> table, and you may need to scan a lot of the table to find
>> an empty cluster. There are ways around this. Some involve making
>> assumptions about single-threaded file access, only one open file,
>> or caching the FAT.
>>
>> If you will be writing large files and deleting files on the disk,
>> I recommend you study the code that allocates a new cluster.
>
> Yes-yes-yes. Once things grow large, the FAT overhead increases
> tremendously. A typical scenario in our applications: a directory of ~10000
> files. One thread writes to a file in this directory while another thread
> scans through the files in the same directory. To ensure coherency,
> the directory has to be re-read and re-sorted on every access.
> Developing an efficient multithreaded filesystem is no simple task;
> it needs a lot of memory, too.

In fact I'd say using FAT with 10k files in one directory is a design flaw.
It is particularly a problem for systems that try to constrain the footprint
to one or two sectors of buffering. Long file names are another problem: the
driver has to make sure the 8.3 alias is unique and shuffle directory entries
around to make space for the Unicode LFN entries.
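
The ~N tail is roughly this: take a shortened, uppercased basis name, tack
on "~1", "~2", ... and probe the directory until the alias is unused. A
rough sketch, not any real driver's code, with a hypothetical
dir_alias_exists() lookup; a real driver also shortens the basis further
once ~10, ~100, ... are needed:

#include <ctype.h>
#include <stdio.h>
#include <string.h>

int dir_alias_exists(const char *alias);    /* hypothetical directory lookup */

/* Build a unique 8.3 alias ("FOOBAR~1.TXT") for a long file name. */
int make_short_alias(const char *longname, char alias[13])
{
    char base[9] = "", ext[4] = "";
    const char *dot = strrchr(longname, '.');
    int bi = 0, ei = 0, n;

    /* Basis name: uppercase, skip spaces and dots, keep at most 6 chars
       so there is room for the "~N" tail. */
    for (const char *p = longname; *p && (!dot || p < dot) && bi < 6; p++)
        if (*p != ' ' && *p != '.')
            base[bi++] = (char)toupper((unsigned char)*p);

    if (dot)
        for (const char *p = dot + 1; *p && ei < 3; p++)
            if (*p != ' ')
                ext[ei++] = (char)toupper((unsigned char)*p);

    /* Probe BASE~1, BASE~2, ... until the alias is not already taken. */
    for (n = 1; n <= 9; n++) {
        if (ei)
            snprintf(alias, 13, "%s~%d.%s", base, n, ext);
        else
            snprintf(alias, 13, "%s~%d", base, n);
        if (!dir_alias_exists(alias))
            return 0;
    }
    return -1;                              /* would need a shorter basis */
}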

Peter


From: Peter Dickerson on
"Peter" <nospam(a)nospam9876.com> wrote in message
news:nbver5la7vs5kh77md9dt2iuqou2a66uec(a)4ax.com...
> Many years ago I embedded CP/M2.2 for this purpose :)
>
> We got around the licensing by buying the same # of old CP/M floppy
> disks.
>
> Not sure one could do this today, but who would care if you just went
> ahead? Presumably the file system will not be user-visible.

I took it to be about removable media. I certainly wouldn't (and don't) use
FAT for deeply embedded stuff. Where the media needs to be transferred to a
PC, FAT is the simplest choice. In that case I always make sure there is an
in-use indicator to warn that the media should not be removed. It may not
help much, but at least the user has reason to blame themselves when things
go wrong.
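
Concretely, the indicator can be as simple as an LED (or a flag the UI
shows) driven around the write/sync path, so it is lit whenever pulling
the card could lose data. A rough sketch, assuming FatFS and a
hypothetical led_set() board-support call:

#include "ff.h"                 /* FatFS */

void led_set(int on);           /* hypothetical board-support call */

/* Keep the "card in use" indicator lit from the start of a write until
   f_sync() has pushed the data, FAT and directory entry to the card.  */
FRESULT logged_write(FIL *fp, const void *buf, UINT len, UINT *written)
{
    FRESULT rc;

    led_set(1);                             /* busy: do not remove card */
    rc = f_write(fp, buf, len, written);
    if (rc == FR_OK)
        rc = f_sync(fp);                    /* flush FAT + dir entry    */
    led_set(0);                             /* quiescent again          */
    return rc;
}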

Peter


From: Mark Borgerson on
In article <hp99ik$38v$1(a)news.eternal-september.org>,
first.last(a)tiscali.invalid says...
> "Peter" <nospam(a)nospam9876.com> wrote in message
> news:nbver5la7vs5kh77md9dt2iuqou2a66uec(a)4ax.com...
> > Many years ago I embedded CP/M2.2 for this purpose :)
> >
> > We got around the licensing by buying the same # of old CP/M floppy
> > disks.
> >
> > Not sure one could do this today, but who would care if you just went
> > ahead? Presumably the file system will not be user-visible.
>
> I took it to be about removable media. I certainly wouldn't (and don't) use
> FAT for deeply embedded stuff. Where the media needs to be transferred to a
> PC, FAT is the simplest choice. In that case I always make sure there is an
> in-use indicator to warn that the media should not be removed. It may not
> help much, but at least the user has reason to blame themselves when things
> go wrong.
>
In some SD-based loggers where I need predictable file write times, I'm
now using a simple sequential file system. On a PC, the SDHC card shows
up as unformatted, with no file system, and I use a PC application that
reads the raw device blocks to get the data off the card. That way I can
read files from the SDHC card at speeds on the order of 15 MB/s, which is
a lot better than I can do with a USB upload speed of a few hundred
KBytes/s!
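
The core of such a sequential scheme can be tiny: fixed-size records
written to consecutive blocks, each tagged so the PC-side reader scanning
the raw device can tell valid data from erased blocks. A rough sketch,
not my actual format, with a hypothetical sd_write_block() driver call:

#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  512u
#define LOG_MAGIC   0x4C4F4721u             /* "LOG!" */

int sd_write_block(uint32_t lba, const uint8_t *buf);   /* hypothetical driver call */

struct log_block {                          /* exactly 512 bytes           */
    uint32_t magic;                         /* marks a valid record        */
    uint32_t seq;                           /* monotonically increasing    */
    uint32_t used;                          /* payload bytes in this block */
    uint8_t  payload[BLOCK_SIZE - 12];
};

static uint32_t next_lba = 1;               /* block 0 reserved for a header */
static uint32_t next_seq = 0;

/* Append one record; the PC-side reader walks the device until it hits a
   block whose magic/seq no longer match, which marks the end of the log. */
int log_append(const void *data, uint32_t len)
{
    struct log_block blk;

    if (len > sizeof blk.payload)
        return -1;                          /* one record per block here */

    memset(&blk, 0xFF, sizeof blk);         /* erased-flash fill pattern */
    blk.magic = LOG_MAGIC;
    blk.seq   = next_seq++;
    blk.used  = len;
    memcpy(blk.payload, data, len);

    return sd_write_block(next_lba++, (const uint8_t *)&blk) == 0 ? 0 : -1;
}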


Mark Borgerson