Copy edit littlefs

ExhaustibleBlockDevice.h
- Fix typos for consistent spelling.
ObservingBlockDevice.h
- Fix typos for consistent spelling.
ReadOnlyBlockDevice.h
- Fix typos for consistent spelling.
README.md
- Fix typos, mostly for branding.
DESIGN.md
- Make minor changes for consistent spelling and precise language.
SPEC.md
- Make minor changes for consistent spelling and precise language.
README.md
- Make minor changes for consistent spelling and precise language.
pull/5538/head
Amanda Butler 2017-11-28 13:06:24 -06:00 committed by Christopher Haster
parent ff25681a21
commit 634fcf0cc4
7 changed files with 163 additions and 164 deletions

View File

@ -106,15 +106,15 @@ public:
*/
virtual bd_size_t get_read_size() const;
/** Get the size of a programable block
/** Get the size of a programmable block
*
* @return Size of a programable block in bytes
* @return Size of a programmable block in bytes
*/
virtual bd_size_t get_program_size() const;
/** Get the size of a eraseable block
/** Get the size of an erasable block
*
* @return Size of a eraseable block in bytes
* @return Size of an erasable block in bytes
*/
virtual bd_size_t get_erase_size() const;

View File

@ -92,15 +92,15 @@ public:
*/
virtual bd_size_t get_read_size() const;
/** Get the size of a programable block
/** Get the size of a programmable block
*
* @return Size of a programable block in bytes
* @return Size of a programmable block in bytes
*/
virtual bd_size_t get_program_size() const;
/** Get the size of a eraseable block
/** Get the size of an erasable block
*
* @return Size of a eraseable block in bytes
* @return Size of an erasable block in bytes
*/
virtual bd_size_t get_erase_size() const;

View File

@ -85,15 +85,15 @@ public:
*/
virtual bd_size_t get_read_size() const;
/** Get the size of a programable block
/** Get the size of a programmable block
*
* @return Size of a programable block in bytes
* @return Size of a programmable block in bytes
*/
virtual bd_size_t get_program_size() const;
/** Get the size of a eraseable block
/** Get the size of an erasable block
*
* @return Size of a eraseable block in bytes
* @return Size of an erasable block in bytes
*/
virtual bd_size_t get_erase_size() const;

View File

@ -1,4 +1,4 @@
## mbed wrapper for the little filesystem
## Mbed wrapper for the little filesystem
This is the mbed wrapper for [littlefs](https://github.com/geky/littlefs),
a little fail-safe filesystem designed for embedded systems.
@ -13,20 +13,20 @@ a little fail-safe filesystem designed for embedded systems.
```
**Bounded RAM/ROM** - The littlefs is designed to work with a limited amount
of memory. Recursion is avoided and dynamic memory is limited to configurable
of memory. Recursion is avoided, and dynamic memory is limited to configurable
buffers that can be provided statically.
**Power-loss resilient** - The littlefs is designed for systems that may have
random power failures. The littlefs has strong copy-on-write guaruntees and
random power failures. The littlefs has strong copy-on-write guarantees, and
storage on disk is always kept in a valid state.
**Wear leveling** - Since the most common form of embedded storage is erodible
**Wear leveling** - Because the most common form of embedded storage is erodible
flash memories, littlefs provides a form of dynamic wear leveling for systems
that can not fit a full flash translation layer.
that cannot fit a full flash translation layer.
## Usage
If you are already using a filesystem in mbed, adopting the littlefs should
If you are already using a filesystem in Mbed, adopting the littlefs should
just require a name change to use the [LittleFileSystem](LittleFileSystem.h)
class.
@ -82,11 +82,11 @@ int main() {
## Reference material
[DESIGN.md](littlefs/DESIGN.md) - DESIGN.md contains a fully detailed dive into
how littlefs actually works. I would encourage you to read it since the
how littlefs actually works. We encourage you to read it because the
solutions and tradeoffs at work here are quite interesting.
[SPEC.md](littlefs/SPEC.md) - SPEC.md contains the on-disk specification of
littlefs with all the nitty-gritty details. Can be useful for developing
littlefs with all the nitty-gritty details. This can be useful for developing
tooling.
## Related projects
@ -96,9 +96,9 @@ currently lives.
[littlefs-fuse](https://github.com/geky/littlefs-fuse) - A [FUSE](https://github.com/libfuse/libfuse)
wrapper for littlefs. The project allows you to mount littlefs directly in a
Linux machine. Can be useful for debugging littlefs if you have an SD card
Linux machine. This can be useful for debugging littlefs if you have an SD card
handy.
[littlefs-js](https://github.com/geky/littlefs-js) - A javascript wrapper for
[littlefs-js](https://github.com/geky/littlefs-js) - A JavaScript wrapper for
littlefs. I'm not sure why you would want this, but it is handy for demos.
You can see it in action [here](http://littlefs.geky.net/demo.html).

View File

@ -23,7 +23,7 @@ often paired with SPI NOR flash chips with about 4MB of flash storage.
Flash itself is a very interesting piece of technology with quite a bit of
nuance. Unlike most other forms of storage, writing to flash requires two
operations: erasing and programming. The programming operation is relatively
cheap, and can be very granular. For NOR flash specifically, byte-level
cheap and can be very granular. For NOR flash specifically, byte-level
programs are quite common. Erasing, however, requires an expensive operation
that forces the state of large blocks of memory to reset in a destructive
reaction that gives flash its name. The [Wikipedia entry](https://en.wikipedia.org/wiki/Flash_memory)
@ -35,7 +35,7 @@ to three strong requirements:
1. **Power-loss resilient** - This is the main goal of the littlefs and the
focus of this project. Embedded systems are usually designed without a
shutdown routine and a notable lack of user interface for recovery, so
filesystems targeting embedded systems must be prepared to lose power an
filesystems targeting embedded systems must be prepared to lose power at
any given time.
Despite this state of things, there are very few embedded filesystems that
@ -67,7 +67,7 @@ to three strong requirements:
using only a bounded amount of RAM and ROM. That is, no matter what is
written to the filesystem, and no matter how large the underlying storage
is, the littlefs will always use the same amount of RAM and ROM. This
presents a very unique challenge, and makes presumably simple operations,
presents a unique challenge and makes presumably simple operations,
such as iterating through the directory tree, surprisingly difficult.
## Existing designs?
@ -86,12 +86,12 @@ One of the most popular designs for flash filesystems is called the
[logging filesystem](https://en.wikipedia.org/wiki/Log-structured_file_system).
The flash filesystems [jffs](https://en.wikipedia.org/wiki/JFFS)
and [yaffs](https://en.wikipedia.org/wiki/YAFFS) are good examples. In
logging filesystem, data is not store in a data structure on disk, but instead
a logging filesystem, data is not stored in a data structure on disk, but instead
the changes to the files are stored on disk. This has several neat advantages,
such as the fact that the data is written in a cyclic log format naturally
wear levels as a side effect. And, with a bit of error detection, the entire
levels wear as a side effect. And, with a bit of error detection, the entire
filesystem can easily be designed to be resilient to power loss. The
journalling component of most modern day filesystems is actually a reduced
journaling component of most modern day filesystems is actually a reduced
form of a logging filesystem. However, logging filesystems have a difficulty
scaling as the size of storage increases. And most filesystems compensate by
caching large parts of the filesystem in RAM, a strategy that is unavailable
@ -114,7 +114,7 @@ pairs, so that at any time there is always a backup containing the previous
state of the metadata.
Consider a small example where each metadata pair has a revision count,
a number as data, and the xor of the block as a quick checksum. If
a number as data and the xor of the block as a quick checksum. If
we update the data to a value of 9, and then to a value of 5, here is
what the pair of blocks may look like after each update:
```
@ -130,7 +130,7 @@ what the pair of blocks may look like after each update:
After each update, we can find the most up to date value of data by looking
at the revision count.
Now consider what the blocks may look like if we suddenly loss power while
Now consider what the blocks may look like if we suddenly lose power while
changing the value of data to 5:
```
block 1 block 2 block 1 block 2 block 1 block 2
@ -145,7 +145,7 @@ changing the value of data to 5:
In this case, block 1 was partially written with a new revision count, but
the littlefs hadn't made it to updating the value of data. However, if we
check our checksum we notice that block 1 was corrupted. So we fall back to
check our checksum, we notice that block 1 was corrupted. So we fall back to
block 2 and use the value 9.
Using this concept, the littlefs is able to update metadata blocks atomically.
@ -154,14 +154,14 @@ arithmetic to handle revision count overflow, but the basic concept
is the same. These metadata pairs define the backbone of the littlefs, and the
rest of the filesystem is built on top of these atomic updates.
## Non-meta data
## Nonmeta data
Now, the metadata pairs do come with some drawbacks. Most notably, each pair
requires two blocks for each block of data. I'm sure users would be very
unhappy if their storage was suddenly cut in half! Instead of storing
unhappy if their storage were suddenly cut in half! Instead of storing
everything in these metadata blocks, the littlefs uses a COW data structure
for files which is in turn pointed to by a metadata block. When
we update a file, we create a copies of any blocks that are modified until
for files, which is, in turn, pointed to by a metadata block. When
we update a file, we create copies of any blocks that are modified until
the metadata blocks are updated with the new copy. Once the metadata block
points to the new copy, we deallocate the old blocks that are no longer in use.
@ -184,8 +184,8 @@ Here is what updating a one-block file may look like:
update data in file update metadata pair
```
It doesn't matter if we lose power while writing block 5 with the new data,
since the old data remains unmodified in block 4. This example also
It doesn't matter if we lose power while writing block 5 with the new data
because the old data remains unmodified in block 4. This example also
highlights how the atomic updates of the metadata blocks provide a
synchronization barrier for the rest of the littlefs.
@ -206,10 +206,10 @@ files in filesystems. Of these, the littlefs uses a rather unique [COW](https://
data structure that allows the filesystem to reuse unmodified parts of the
file without additional metadata pairs.
First lets consider storing files in a simple linked-list. What happens when
append a block? We have to change the last block in the linked-list to point
to this new block, which means we have to copy out the last block, and change
the second-to-last block, and then the third-to-last, and so on until we've
First, let's consider storing files in a simple linked-list. What happens when
we append a block? We have to change the last block in the linked-list to point
to this new block, which means we have to copy out the last block and change
the second-to-last block and then the third-to-last and so on until we've
copied out the entire file.
```
@ -221,12 +221,12 @@ Exhibit A: A linked-list
'--------' '--------' '--------' '--------' '--------' '--------'
```
To get around this, the littlefs, at its heart, stores files backwards. Each
To get around this, the littlefs, at its heart, stores files backward. Each
block points to its predecessor, with the first block containing no pointers.
If you think about for a while, it starts to make a bit of sense. Appending
blocks just point to their predecessor and no other blocks need to be updated.
If you think about it for a while, it starts to make a bit of sense. Appending
blocks just point to their predecessor, and no other blocks need to be updated.
If we update a block in the middle, we will need to copy out the blocks that
follow, but can reuse the blocks before the modified block. Since most file
follow but can reuse the blocks before the modified block. Because most file
operations either reset the file each write or append to files, this design
avoids copying the file in the most common cases.
@ -239,7 +239,7 @@ Exhibit B: A backwards linked-list
'--------' '--------' '--------' '--------' '--------' '--------'
```
However, a backwards linked-list does come with a rather glaring problem.
However, a backward linked-list does come with a rather glaring problem.
Iterating over a file _in order_ has a runtime of O(n^2). Gah! A quadratic
runtime to just _read_ a file? That's awful. Keep in mind that reading files is
usually the most common filesystem operation.
@ -257,7 +257,7 @@ instruction, which allows us to calculate the power-of-two factors efficiently.
For a given block n, the block contains ctz(n)+1 pointers.
```
Exhibit C: A backwards CTZ skip-list
Exhibit C: A backward CTZ skip-list
.--------. .--------. .--------. .--------. .--------. .--------.
| data 0 |<-| data 1 |<-| data 2 |<-| data 3 |<-| data 4 |<-| data 5 |
| |<-| |--| |<-| |--| | | |
@ -268,7 +268,7 @@ Exhibit C: A backwards CTZ skip-list
The additional pointers allow us to navigate the data-structure on disk
much more efficiently than in a single linked-list.
Taking exhibit C for example, here is the path from data block 5 to data
Taking exhibit C, for example, here is the path from data block 5 to data
block 1. You can see how data block 3 was completely skipped:
```
.--------. .--------. .--------. .--------. .--------. .--------.
@ -278,7 +278,7 @@ block 1. You can see how data block 3 was completely skipped:
'--------' '--------' '--------' '--------' '--------' '--------'
```
The path to data block 0 is even more quick, requiring only two jumps:
The path to data block 0 is even quicker, requiring only two jumps:
```
.--------. .--------. .--------. .--------. .--------. .--------.
| data 0 | | data 1 | | data 2 | | data 3 | | data 4 |<-| data 5 |
@ -291,13 +291,13 @@ We can find the runtime complexity by looking at the path to any block from
the block containing the most pointers. Every step along the path divides
the search space for the block in half. This gives us a runtime of O(logn).
To get to the block with the most pointers, we can perform the same steps
backwards, which puts the runtime at O(2logn) = O(logn). The interesting
backward, which puts the runtime at O(2logn) = O(logn). The interesting
part about this data structure is that this optimal path occurs naturally
if we greedily choose the pointer that covers the most distance without passing
our target block.
So now we have a representation of files that can be appended trivially with
a runtime of O(1), and can be read with a worst case runtime of O(nlogn).
a runtime of O(1) and can be read with a worst case runtime of O(nlogn).
Given that the runtime is also divided by the amount of data we can store
in a block, this is pretty reasonable.
@ -317,9 +317,9 @@ per block.
![overhead_per_block](https://latex.codecogs.com/svg.latex?%5Clim_%7Bn%5Cto%5Cinfty%7D%5Cfrac%7B1%7D%7Bn%7D%5Csum_%7Bi%3D0%7D%5E%7Bn%7D%5Cleft%28%5Ctext%7Bctz%7D%28i%29&plus;1%5Cright%29%20%3D%20%5Csum_%7Bi%3D0%7D%5Cfrac%7B1%7D%7B2%5Ei%7D%20%3D%202)
Finding the maximum number of pointers in a block is a bit more complicated,
but since our file size is limited by the integer width we use to store the
but because our file size is limited by the integer width we use to store the
size, we can solve for it. Setting the overhead of the maximum pointers equal
to the block size we get the following equation. Note that a smaller block size
to the block size, we get the following equation. Note that a smaller block size
results in more pointers, and a larger word width results in larger pointers.
![maximum overhead](https://latex.codecogs.com/svg.latex?B%20%3D%20%5Cfrac%7Bw%7D%7B8%7D%5Cleft%5Clceil%5Clog_2%5Cleft%28%5Cfrac%7B2%5Ew%7D%7BB-2%5Cfrac%7Bw%7D%7B8%7D%7D%5Cright%29%5Cright%5Crceil)
@ -333,19 +333,19 @@ widths:
32 bit CTZ skip-list = minimum block size of 104 bytes
64 bit CTZ skip-list = minimum block size of 448 bytes
Since littlefs uses a 32 bit word size, we are limited to a minimum block
Because littlefs uses a 32 bit word size, we are limited to a minimum block
size of 104 bytes. This is a perfectly reasonable minimum block size, with most
block sizes starting around 512 bytes. So we can avoid additional logic to
prevent overflowing our block's capacity in the CTZ skip-list.
So, how do we store the skip-list in a directory entry? A naive approach would
be to store a pointer to the head of the skip-list, the length of the file
in bytes, the index of the head block in the skip-list, and the offset in the
head block in bytes. However this is a lot of information, and we can observe
in bytes, the index of the head block in the skip-list and the offset in the
head block in bytes. However, this is a lot of information, and we can observe
that a file size maps to only one block index + offset pair. So it should be
sufficient to store only the pointer and file size.
But there is one problem, calculating the block index + offset pair from a
But there is one problem: Calculating the block index plus offset pair from a
file size doesn't have an obvious implementation.
We can start by just writing down an equation. The first idea that comes to
@ -360,7 +360,7 @@ w = word width in bits
n = block index in skip-list
N = file size in bytes
And this works quite well, but is not trivial to calculate. This equation
And this works quite well but is not trivial to calculate. This equation
requires O(n) to compute, which brings the entire runtime of reading a file
to O(n^2logn). Fortunately, the additional O(n) does not need to touch disk,
so it is not completely unreasonable. But if we could solve this equation into
@ -372,7 +372,7 @@ Fortunately, there is a powerful tool I've found useful in these situations:
The [On-Line Encyclopedia of Integer Sequences (OEIS)](https://oeis.org/).
If we work out the first couple of values in our summation, we find that CTZ
maps to [A001511](https://oeis.org/A001511), and its partial summation maps
to [A005187](https://oeis.org/A005187), and surprisingly, both of these
to [A005187](https://oeis.org/A005187), and, surprisingly, both of these
sequences have relatively trivial equations! This leads us to a rather
unintuitive property:
@ -383,9 +383,9 @@ ctz(i) = the number of trailing bits that are 0 in i
popcount(i) = the number of bits that are 1 in i
It's a bit bewildering that these two seemingly unrelated bitwise instructions
are related by this property. But if we start to disect this equation we can
are related by this property. But if we start to dissect this equation, we can
see that it does hold. As n approaches infinity, we do end up with an average
overhead of 2 pointers as we find earlier. And popcount seems to handle the
overhead of 2 pointers as we found earlier. And popcount seems to handle the
error from this average as it accumulates in the CTZ skip-list.
Now we can substitute into the original equation to get a trivial equation
@ -393,7 +393,7 @@ for a file size:
![summation2](https://latex.codecogs.com/svg.latex?N%20%3D%20Bn%20-%20%5Cfrac%7Bw%7D%7B8%7D%5Cleft%282n-%5Ctext%7Bpopcount%7D%28n%29%5Cright%29)
Unfortunately, we're not quite done. The popcount function is non-injective,
Unfortunately, we're not quite done. The popcount function is noninjective,
so we can only find the file size from the block index, not the other way
around. However, we can solve for an n' block index that is greater than n
with an error bounded by the range of the popcount function. We can then
@ -410,7 +410,7 @@ a bit to avoid integer overflow:
![formulaforoff](https://latex.codecogs.com/svg.latex?%5Cmathit%7Boff%7D%20%3D%20N%20-%20%5Cleft%28B-2%5Cfrac%7Bw%7D%7B8%7D%5Cright%29n%20-%20%5Cfrac%7Bw%7D%7B8%7D%5Ctext%7Bpopcount%7D%28n%29)
The solution involves quite a bit of math, but computers are very good at math.
We can now solve for the block index + offset while only needed to store the
We can now solve for the block index plus offset while only needing to store the
file size in O(1).
Here is what it might look like to update a file stored with a CTZ skip-list:
@ -496,20 +496,20 @@ initially the littlefs was designed with this in mind. By storing a reference
to the free list in every single metadata pair, additions to the free list
could be updated atomically at the same time the replacement blocks were
stored in the metadata pair. During boot, every metadata pair had to be
scanned to find the most recent free list, but once the list was found the
scanned to find the most recent free list, but once the list is found, the
state of all free blocks becomes known.
However, this approach had several issues:
- There was a lot of nuanced logic for adding blocks to the free list without
modifying the blocks, since the blocks remain active until the metadata is
modifying the blocks because the blocks remain active until the metadata is
updated.
- The free list had to support both additions and removals in fifo order while
- The free list had to support both additions and removals in FIFO order while
minimizing block erases.
- The free list had to handle the case where the filesystem completely ran
out of blocks and may no longer be able to add blocks to the free list.
- If we used a revision count to track the most recently updated free list,
metadata blocks that were left unmodified were ticking time bombs that would
cause the system to go haywire if the revision count overflowed
cause the system to go haywire if the revision count overflowed.
- Every single metadata block wasted space to store these free list references.
Actually, to simplify, this approach had one massive glaring issue: complexity.
@ -525,7 +525,7 @@ In the end, the littlefs adopted more of a "drop it on the floor" strategy.
That is, the littlefs doesn't actually store information about which blocks
are free on the storage. The littlefs already stores which files _are_ in
use, so to find a free block, the littlefs just takes all of the blocks that
exist and subtract the blocks that are in use.
exist and subtracts the blocks that are in use.
Of course, it's not quite that simple. Most filesystems that adopt this "drop
it on the floor" strategy either rely on some properties inherent to the
@ -539,8 +539,8 @@ would have an abhorrent runtime.
So the littlefs compromises. It doesn't store a bitmap the size of the storage,
but it does store a little bit-vector that contains a fixed set lookahead
for block allocations. During a block allocation, the lookahead vector is
checked for any free blocks, if there are none, the lookahead region jumps
forward and the entire filesystem is scanned for free blocks.
checked for any free blocks. If there are none, the lookahead region jumps
forward, and the entire filesystem is scanned for free blocks.
Here's what it might look like to allocate 4 blocks on a decently busy
filesystem with a 32bit lookahead and a total of
@ -570,7 +570,7 @@ alloc = 112 lookahead: ffff8000
While this lookahead approach still has an asymptotic runtime of O(n^2) to
scan all of storage, the lookahead reduces the practical runtime to a
reasonable amount. Bit-vectors are surprisingly compact, given only 16 bytes,
reasonable amount. Bit-vectors are surprisingly compact. Given only 16 bytes,
the lookahead could track 128 blocks. For a 4Mbyte flash chip with 4Kbyte
blocks, the littlefs would only need 8 passes to scan the entire storage.
@ -581,12 +581,12 @@ causing difficult to detect memory leaks.
## Directories
Now we just need directories to store our files. Since we already have
metadata blocks that store information about files, lets just use these
Now we just need directories to store our files. Because we already have
metadata blocks that store information about files, let's just use these
metadata blocks as the directories. Maybe turn the directories into linked
lists of metadata blocks so it isn't limited by the number of files that fit
lists of metadata blocks, so it isn't limited by the number of files that fit
in a single block. Add entries that represent other nested directories.
Drop "." and ".." entries, cause who needs them. Dust off our hands and
Drop "." and ".." entries because who needs them? Dust off our hands, and
we now have a directory tree.
```
@ -611,17 +611,17 @@ we now have a directory tree.
'--------' '--------' '--------' '--------' '--------'
```
Unfortunately it turns out it's not that simple. See, iterating over a
Unfortunately, it turns out it's not that simple. See, iterating over a
directory tree isn't actually all that easy, especially when you're trying
to fit in a bounded amount of RAM, which rules out any recursive solution.
And since our block allocator involves iterating over the entire filesystem
And because our block allocator involves iterating over the entire filesystem
tree, possibly multiple times in a single allocation, iteration needs to be
efficient.
So, as a solution, the littlefs adopted a sort of threaded tree. Each
directory not only contains pointers to all of its children, but also a
pointer to the next directory. These pointers create a linked-list that
is threaded through all of the directories in the filesystem. Since we
is threaded through all of the directories in the filesystem. Because we
only use this linked list to check for existence, the order doesn't actually
matter. As an added plus, we can repurpose the pointer for the individual
directory linked-lists and avoid using any additional space.
@ -648,16 +648,16 @@ directory linked-lists and avoid using any additional space.
'--------' '--------' '--------' '--------' '--------'
```
This threaded tree approach does come with a few tradeoffs. Now, anytime we
This threaded tree approach does come with a few tradeoffs. Now, any time we
want to manipulate the directory tree, we find ourselves having to update two
pointers instead of one. For anyone familiar with creating atomic data
structures this should set off a whole bunch of red flags.
structures, this should set off a whole bunch of red flags.
But unlike the data structure guys, we can update a whole block atomically! So
But unlike the data structure people, we can update a whole block atomically! So
as long as we're really careful (and cheat a little bit), we can still
manipulate the directory tree in a way that is resilient to power loss.
Consider how we might add a new directory. Since both pointers that reference
Consider how we might add a new directory. Because both pointers that reference
it can come from the same directory, we only need a single atomic update to
finagle the directory into the filesystem:
```
@ -759,7 +759,7 @@ v
'--------' '--------'
```
Wait, wait, wait, that's not atomic at all! If power is lost after removing
Wait, wait, wait; that's not atomic at all! If power is lost after removing
directory B from the root, directory B is still in the linked-list. We've
just created a memory leak!
@ -850,18 +850,18 @@ lose power inconveniently.
Initially, you might think this is fine. Dir A _might_ end up with two parents,
but the filesystem will still work as intended. But then this raises the
question of what do we do when the dir A wears out? For other directory blocks
we can update the parent pointer, but for a dir with two parents we would need
work out how to update both parents. And the check for multiple parents would
question of what to do when dir A wears out. For other directory blocks,
we can update the parent pointer, but for a dir with two parents, we would need
to work out how to update both parents. And the check for multiple parents would
need to be carried out for every directory, even if the directory has never
been moved.
It also presents a bad user-experience, since the condition of ending up with
It also presents a bad user experience. Because the condition of ending up with
two parents is rare, it's unlikely user-level code will be prepared. Just think
about how a user would recover from a multi-parented directory. They can't just
remove one directory, since remove would report the directory as "not empty".
about how users would recover from a multiparented directory. They can't just
remove one directory because remove would report the directory as "not empty".
Other atomic filesystems simple COW the entire directory tree. But this
Other atomic filesystems simply COW the entire directory tree. But this
introduces a significant bit of complexity, which leads to code size, along
with a surprisingly expensive runtime cost during what most users assume is
a single pointer update.
@ -969,7 +969,7 @@ of two things is possible. Either the directory entry exists elsewhere in the
filesystem, or it doesn't. This is a O(n) operation, but only occurs in the
unlikely case we lost power during a move.
And we can easily fix the "moved" directory entry. Since we're already scanning
And we can easily fix the "moved" directory entry. Because we're already scanning
the filesystem during the deorphan step, we can also check for moved entries.
If we find one, we either remove the "moved" marking or remove the whole entry
if it exists elsewhere in the filesystem.
@ -979,7 +979,7 @@ if it exists elsewhere in the filesystem.
So now that we have all of the pieces of a filesystem, we can look at a more
subtle attribute of embedded storage: The wear down of flash blocks.
The first concern for the littlefs, is that prefectly valid blocks can suddenly
The first concern for the littlefs is that perfectly valid blocks can suddenly
become unusable. As a nice side-effect of using a COW data-structure for files,
we can simply move on to a different block when a file write fails. All
modifications to files are performed in copies, so we will only replace the
@ -988,7 +988,7 @@ the other hand, need a different strategy.
The solution to directory corruption in the littlefs relies on the redundant
nature of the metadata pairs. If an error is detected during a write to one
of the metadata pairs, we seek out a new block to take its place. Once we find
of the metadata pairs, we seek a new block to take its place. Once we find
a block without errors, we iterate through the directory tree, updating any
references to the corrupted metadata pair to point to the new metadata block.
Just like when we remove directories, we can lose power during this operation
@ -1141,8 +1141,8 @@ v
'---------'---------' '---------'---------' '---------'---------'
```
Also one question I've been getting is, what about the root directory?
It can't move so wouldn't the filesystem die as soon as the root blocks
Also, one question I've been getting is: What about the root directory?
It can't move, so wouldn't the filesystem die as soon as the root blocks
develop errors? And you would be correct. So instead of storing the root
in the first few blocks of the storage, the root is actually pointed to
by the superblock. The superblock contains a few bits of static data, but
@ -1151,9 +1151,9 @@ develops errors and needs to be moved.
## Wear leveling
The second concern for the littlefs, is that blocks in the filesystem may wear
unevenly. In this situation, a filesystem may meet an early demise where
there are no more non-corrupted blocks that aren't in use. It's common to
The second concern for the littlefs is that blocks in the filesystem may wear
unevenly. In this situation, a filesystem may meet an early demise when
there are no more noncorrupted blocks that aren't in use. It's common to
have files that were written once and left unmodified, wasting the potential
erase cycles of the blocks they sit on.
@ -1180,22 +1180,21 @@ handle the case of write-once files, and near the end of the lifetime of a
flash device, you would likely end up with uneven wear on the blocks anyways.
As a flash device reaches the end of its life, the metadata blocks will
naturally be the first to go because they are updated most often. In this
situation, the littlefs is designed to simply move on to another set of
metadata blocks. This traveling means that at the end of a flash device's
life, the filesystem will have worn the device down nearly as evenly as the
usual dynamic wear leveling could. More aggressive wear leveling would come
with a code-size cost for marginal benefit.
One important takeaway to note: If your storage stack uses highly sensitive
storage, such as NAND flash, static wear leveling is the only valid solution.
In most cases, you are going to be better off using a full [flash translation layer (FTL)](https://en.wikipedia.org/wiki/Flash_translation_layer).
NAND flash already has many limitations that make it poorly suited for an
embedded system: low erase cycles, very large blocks, errors that can develop
even during reads, errors that can develop during writes of neighboring blocks.
Managing sensitive storage, such as NAND flash, is out of scope for the littlefs.
The littlefs does have some properties that may be beneficial on top of an FTL,
such as limiting the number of writes where possible, but if you have the
storage requirements that necessitate NAND flash, you should have
the RAM to match and just use an FTL or flash filesystem.
@ -1204,21 +1203,21 @@ the RAM to match and just use an FTL or flash filesystem.
So, to summarize:
1. The littlefs is composed of directory blocks.
2. Each directory is a linked-list of metadata pairs.
3. These metadata pairs can be updated atomically by alternating which
   metadata block is active.
4. Directory blocks contain either references to other directories or files.
5. Files are represented by copy-on-write CTZ skip-lists, which support O(1)
   append and O(n log n) reading.
6. Blocks are allocated by scanning the filesystem for used blocks in a
   fixed-size lookahead region that is stored in a bit-vector.
7. To facilitate scanning the filesystem, all directories are part of a
   linked-list that is threaded through the entire filesystem.
8. If a block develops an error, the littlefs allocates a new block and
   moves the data and references of the old block to the new.
9. In any case where an atomic operation is not possible, mistakes are resolved
   by a deorphan step that occurs on the first allocation after boot.
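The CTZ skip-list behavior in point 5 can be sketched as follows. This is an illustrative model rather than code from the littlefs sources: it assumes block n in the list holds ctz(n)+1 back-pointers (block 0 holds none), which is what bounds a backwards traversal to O(log n) hops.

```c
#include <stdint.h>

// Sketch (not littlefs source code): in a CTZ skip-list, block n holds a
// pointer to block n - 2^i for every power of two 2^i that divides n,
// giving ctz(n)+1 pointers. Defined here for n >= 1; block 0 has none.
static unsigned ctz_pointers(uint32_t n) {
    unsigned count = 1;
    while (n && !(n & 1)) {
        count++;
        n >>= 1;
    }
    return count;
}

// Walking from block `from` back to block `to` takes O(log n) hops,
// because each hop follows the largest available power-of-two stride.
static unsigned ctz_hops(uint32_t from, uint32_t to) {
    unsigned hops = 0;
    while (from > to) {
        uint32_t stride = 1;
        // double the stride while `from` is divisible by the doubled
        // stride and the hop would not overshoot `to`
        while (!(from & ((stride << 1) - 1)) && from - (stride << 1) >= to) {
            stride <<= 1;
        }
        from -= stride;
        hops++;
    }
    return hops;
}
```

For example, under this model block 8 can jump straight to block 0 in a single hop, while block 7 needs three.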
That's the little filesystem. Thanks for reading!
@ -12,16 +12,16 @@ A little fail-safe filesystem designed for embedded systems.
```
**Bounded RAM/ROM** - The littlefs is designed to work with a limited amount
of memory. Recursion is avoided, and dynamic memory is limited to configurable
buffers that can be provided statically.
**Power-loss resilient** - The littlefs is designed for systems that may have
random power failures. The littlefs has strong copy-on-write guarantees, and
storage on disk is always kept in a valid state.
**Wear leveling** - Because the most common form of embedded storage is erodible
flash memories, littlefs provides a form of dynamic wear leveling for systems
that cannot fit a full flash translation layer.
## Example
@ -93,20 +93,20 @@ can be found in the comments in [lfs.h](lfs.h).
As you may have noticed, littlefs takes in a configuration structure that
defines how the filesystem operates. The configuration struct provides the
filesystem with the block device operations and dimensions, tweakable
parameters that trade memory usage for performance, and optional
static buffers if the user wants to avoid dynamic memory.
The state of the littlefs is stored in the `lfs_t` type, which is left up
to the user to allocate, allowing multiple filesystems to be in use
simultaneously. With the `lfs_t` and configuration struct, a user can
format a block device or mount the filesystem.
Once mounted, the littlefs provides a full set of POSIX-like file and
directory functions, with the deviation that the allocation of filesystem
structures must be provided by the user.
All POSIX operations, such as remove and rename, are atomic, even in the event
of power loss. Additionally, no file updates are actually committed to the
filesystem until sync or close is called on the file.
## Other notes
@ -116,24 +116,24 @@ can be either one of those found in the `enum lfs_error` in [lfs.h](lfs.h),
or an error returned by the user's block device operations.
It should also be noted that the current implementation of littlefs doesn't
really do anything to ensure that the data written to disk is machine portable.
This is fine as long as all of the involved machines share endianness
(little-endian) and don't have strange padding requirements.
## Reference material
[DESIGN.md](DESIGN.md) - DESIGN.md contains a fully detailed dive into how
littlefs actually works. We would encourage you to read it because the
solutions and tradeoffs at work here are quite interesting.
[SPEC.md](SPEC.md) - SPEC.md contains the on-disk specification of littlefs
with all the nitty-gritty details. This can be useful for developing tooling.
## Testing
The littlefs comes with a test suite designed to run on a PC using the
[emulated block device](emubd/lfs_emubd.h) found in the emubd directory.
The tests assume a Linux environment and can be started with make:
``` bash
make test
@ -142,15 +142,15 @@ make test
## Related projects
[mbed-littlefs](https://github.com/armmbed/mbed-littlefs) - The easiest way to
get started with littlefs is to jump into [Mbed](https://os.mbed.com/), which
already has block device drivers for most forms of embedded storage. The
mbed-littlefs provides the Mbed wrapper for littlefs.
[littlefs-fuse](https://github.com/geky/littlefs-fuse) - A [FUSE](https://github.com/libfuse/libfuse)
wrapper for littlefs. The project allows you to mount littlefs directly in a
Linux machine. This can be useful for debugging littlefs if you have an SD card
handy.
[littlefs-js](https://github.com/geky/littlefs-js) - A JavaScript wrapper for
littlefs. I'm not sure why you would want this, but it is handy for demos.
You can see it in action [here](http://littlefs.geky.net/demo.html).
@ -3,8 +3,8 @@
This is the technical specification of the little filesystem. This document
covers the technical details of how the littlefs is stored on disk for
introspection and tooling development. This document assumes you are
familiar with the design of the littlefs. For more information on how littlefs
works, check out [DESIGN.md](DESIGN.md).
```
| | | .---._____
@ -17,23 +17,23 @@ works check out [DESIGN.md](DESIGN.md).
## Some important details
- The littlefs is a block-based filesystem. The disk is divided into
an array of evenly sized blocks that are used as the logical unit of storage
in littlefs. Block pointers are stored in 32 bits.
- There is no explicit free-list stored on disk. The littlefs only knows what
is in use in the filesystem.
- The littlefs uses the value of 0xffffffff to represent a null block-pointer.
- All values in littlefs are stored in little-endian byte order.
## Directories/Metadata pairs
Metadata pairs form the backbone of the littlefs and provide a system for
atomic updates. Even the superblock is stored in a metadata pair.
As its name suggests, a metadata pair is stored in two blocks, with one block
acting as a redundant backup in case the other is corrupted. These two blocks
could be anywhere in the disk and may not be next to each other, so any
pointers to directory pairs need to be stored as two block pointers.
@ -50,7 +50,7 @@ Here's the layout of metadata blocks on disk:
**Revision count** - Incremented every update, only the uncorrupted
metadata-block with the most recent revision count contains the valid metadata.
Comparison between revision counts must use sequence comparison because the
revision counts may overflow.
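The overflow-safe comparison can be sketched in a couple of lines. The cast-based trick below is a common serial-number-arithmetic idiom, not necessarily the exact littlefs implementation:

```c
#include <stdint.h>

// Sketch: sequence comparison treats the 32-bit revision counts as points
// on a circle, so a freshly overflowed count (e.g. 0x00000001) still
// compares as newer than a count just below the wrap (e.g. 0xffffffff).
static int seq_newer(uint32_t a, uint32_t b) {
    return (int32_t)(a - b) > 0;
}
```

A plain `a > b` would pick the wrong block the moment the revision count wraps.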
**Dir size** - Size in bytes of the contents in the current metadata block,
@ -61,12 +61,12 @@ next metadata-pair pointed to by the tail pointer.
**Tail pointer** - Pointer to the next metadata-pair in the filesystem.
A null pair-pointer (0xffffffff, 0xffffffff) indicates the end of the list.
If the highest bit in the dir size is set, this points to the next
metadata-pair in the current directory. Otherwise, it points to an arbitrary
metadata-pair. Starting with the superblock, the tail-pointers form a
linked-list containing all metadata-pairs in the filesystem.
**CRC** - 32 bit CRC used to detect corruption from power loss, from block
end-of-life or just from noise on the storage bus. The CRC is appended to
the end of each metadata-block. The littlefs uses the standard CRC-32, which
uses a polynomial of 0x04c11db7, initialized with 0xffffffff.
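As a sketch, the named polynomial and seed give the following bit-at-a-time update, using the reflected form 0xedb88320 of 0x04c11db7. Whether a final inversion is applied when the CRC is stored in the metadata block is left to the implementation; this sketch returns the raw state:

```c
#include <stddef.h>
#include <stdint.h>

// Sketch of a bit-at-a-time CRC-32: reflected polynomial 0xedb88320
// (the bit-reversed form of 0x04c11db7), seeded with 0xffffffff.
static uint32_t crc32_update(uint32_t crc, const void *buffer, size_t size) {
    const uint8_t *data = buffer;
    for (size_t i = 0; i < size; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++) {
            crc = (crc >> 1) ^ ((crc & 1) ? 0xedb88320 : 0);
        }
    }
    return crc;
}
```

With a final inversion this matches the standard CRC-32 check value: the CRC of "123456789" comes out to 0xcbf43926.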
@ -90,9 +90,9 @@ Here's an example of a simple directory stored on disk:
```
A note about the tail pointer linked-list: Normally, this linked-list is
threaded through the entire filesystem. However, after power loss, this
linked-list may become out of sync with the rest of the filesystem.
- The linked-list may contain a directory that has actually been removed.
- The linked-list may contain a metadata pair that has not been updated after
a block in the pair has gone bad.
@ -104,7 +104,7 @@ if littlefs is mounted read-only.
Each metadata block contains a series of entries that follow a standard
layout. An entry contains the type of the entry, along with a section for
entry-specific data, attributes and a name.
Here's the layout of entries on disk:
@ -119,9 +119,9 @@ Here's the layout of entries on disk:
| 0x4+e+a | name length bytes | entry name |
**Entry type** - Type of the entry, currently this is limited to the following:
- 0x11 - file entry.
- 0x22 - directory entry.
- 0x2e - superblock entry.
Additionally, the type is broken into two 4 bit nibbles, with the upper nibble
specifying the type's data structure used when scanning the filesystem. The
@ -134,17 +134,17 @@ filesystem. If the entry exists elsewhere, this entry must be treated as
though it does not exist.
**Entry length** - Length in bytes of the entry-specific data. This does
not include the entry type size, attributes or name. The full size in bytes
of the entry is 4 plus entry length plus attribute length plus name length.
**Attribute length** - Length of system-specific attributes in bytes. Because
attributes are system specific, there is not much guarantee on the values in
this section, and systems are expected to work even when it is empty. See the
[attributes](#entry-attributes) section for more details.
**Name length** - Length of the entry name. Entry names are stored as UTF-8,
though most systems will probably only support ASCII. Entry names cannot
contain '/' and cannot be '.' or '..' because these are a part of the syntax of
filesystem paths.
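The entry layout above can be sketched as a small decoder. The one-byte-per-field header order here is an assumption read off the offset table (a 4-byte header of type, entry length, attribute length and name length), not code taken from the littlefs sources:

```c
#include <stdint.h>

// Sketch of the 4-byte entry header described above. The field order is
// inferred from the "0x4+e+a" offsets in the layout table.
struct entry_header {
    uint8_t type;      // 0x11 file, 0x22 directory, 0x2e superblock
    uint8_t entry_len; // e: entry-specific data
    uint8_t attr_len;  // a: system-specific attributes
    uint8_t name_len;  // n: UTF-8 name
};

static struct entry_header entry_parse(const uint8_t *block) {
    struct entry_header h = {block[0], block[1], block[2], block[3]};
    return h;
}

// Full on-disk size of the entry: 4 + e + a + n, as stated above.
static uint32_t entry_size(struct entry_header h) {
    return 4 + h.entry_len + h.attr_len + h.name_len;
}

// The upper nibble of the type names the data structure used when
// scanning the filesystem; the lower nibble distinguishes the entry kind.
static uint8_t entry_struct_nibble(uint8_t type) {
    return type >> 4;
}
```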
Here's an example of a simple entry stored on disk:
@ -166,9 +166,9 @@ The superblock is the anchor for the littlefs. The superblock is stored as
a metadata pair containing a single superblock entry. It is through the
superblock that littlefs can access the rest of the filesystem.
The superblock can always be found in blocks 0 and 1; however, fetching the
superblock requires knowing the block size. The block size can be guessed by
searching the beginning of disk for the string "littlefs", though currently,
the filesystem relies on the user providing the correct block size.
The superblock is the most valuable block in the filesystem. It is updated
@ -200,8 +200,8 @@ Here's the layout of the superblock entry:
**Version** - The littlefs version encoded as a 32 bit value. The upper 16 bits
encodes the major version, which is incremented when a breaking-change is
introduced in the filesystem specification. The lower 16 bits encodes the
minor version, which is incremented when a backward-compatible change is
introduced. Nonstandard Attribute changes do not change the version. This
specification describes version 1.1 (0x00010001), which is the first version
of littlefs.
@ -315,27 +315,27 @@ Here's an example of a file entry:
## Entry attributes
Each dir entry can have up to 256 bytes of system-specific attributes. Because
these attributes are system-specific, they may not be portable between
different systems. For this reason, all attributes must be optional. A minimal
littlefs driver must be able to get away with supporting no attributes at all.
For some level of portability, littlefs has a simple scheme for attributes.
Each attribute is prefixed with an 8-bit type that indicates what the attribute
is. The length of attributes may also be determined from this type. Attributes
in an entry should be sorted based on portability because attribute parsing
will end when it hits the first attribute it does not understand.
Each system should choose a 4-bit value to prefix all attribute types with to
avoid conflicts with other systems. Additionally, littlefs drivers that support
attributes should provide an "ignore attributes" flag to users in case attribute
conflicts do occur.
Attribute types prefixed with 0x0 and 0xf are currently reserved for future
standard attributes. Standard attributes will be added to this document as
they are introduced.
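The parsing rule above can be sketched as a scan that walks the attribute region and stops at the first type it does not understand. The type-to-length table below is hypothetical, standing in for whatever types a real driver knows:

```c
#include <stddef.h>
#include <stdint.h>

// Sketch: map an attribute type to its payload length in bytes, or -1
// for an unknown type. These lengths are hypothetical examples; a real
// driver would only know the types of its own system.
static int attr_payload_len(uint8_t type) {
    switch (type) {
    case 0xc1: return 5;  // hypothetical time attribute
    case 0xc2: return 2;  // hypothetical permissions attribute
    default:   return -1; // unknown: stop parsing here
    }
}

// Returns the number of attributes understood before the first unknown
// one, following the rule that parsing ends at the first unknown type.
static int attrs_scan(const uint8_t *attrs, size_t len) {
    int count = 0;
    size_t off = 0;
    while (off < len) {
        int plen = attr_payload_len(attrs[off]);
        if (plen < 0 || off + 1 + (size_t)plen > len) {
            break;
        }
        off += 1 + (size_t)plen;
        count++;
    }
    return count;
}
```

Sorting attributes by portability matters precisely because of this early stop: anything after the first unknown type is never seen.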
Here's an example of a nonstandard time attribute:
```
(8 bits) attribute type = time (0xc1)
(72 bits) time in seconds = 1506286115 (0x0059c81a23)
@ -343,7 +343,7 @@ Here's an example of non-standard time attribute:
00000000: c1 23 1a c8 59 00 .#..Y.
```
Here's an example of a nonstandard permissions attribute:
```
(8 bits) attribute type = permissions (0xc2)
(16 bits) permission bits = rw-rw-r-- (0x01b4)