So far this is just a copy of the nullfs example from
/usr/share/doc/python-fuse with some stuff renamed.
To make it work:
- How do you get another arg in the options?
- pydoc fuse shows some magic option parser stuff (rough sketch below)
- need this for the "source" directory, or backing storage area
- Better to compress chunks? Or have a blob more like a zip?
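
On the options question: if I'm reading the fuse-python xmp.py example right,
the extra "storage" directory can come in as a mount option. Rough sketch
only, under that assumption -- the CompFS name and the "storage" option are
made up here:

import fuse
from fuse import Fuse

fuse.fuse_python_api = (0, 2)

class CompFS(Fuse):
    # getattr/readdir/open/read/... go here, same shape as the nullfs example
    pass

def main():
    usage = "Compressed chunk filesystem\n\n" + Fuse.fusage
    server = CompFS(version="%prog " + fuse.__version__,
                    usage=usage, dash_s_do='setsingle')
    # accepted as a mount option:  -o storage=/path/to/backing/dir
    server.parser.add_option(mountopt="storage", metavar="PATH",
                             help="backing storage directory")
    server.parse(values=server, errex=1)
    server.main()

if __name__ == '__main__':
    main()

Then it should be mountable with something like
"compfs.py mountpoint -o storage=/path/to/backing", and the value lands on
server.storage because of values=server.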
-----
TODO:
+ Make inflate/deflate block based as needed, so we don't have to do a
bunch of work up front and waste a bunch of space on disk
- done
- Make files just contain a backing storage key; this key will reference
what we have in it now (the data list and stat info), so that complete
duplicate files will not take up a few extra megs but can still have
their own permissions and stuff (rough chunk-store sketch after this list)
+ Copying read-only files doesn't work (permission denied on close, because
that is the point where we open and write to the original file)
- done - we open a file handle at __init__ now and use that
- R/W is basically ignored at this point
- fsck:
- test that each chunk before the last is a full block size (this would
be a good assert too)
- delete unused chunks (refcounting; rough sweep sketch after this list)
- pack multiple chunks into "super chunks" like cromfs to get better
compression (e.g. 4M of data will presumably compress better than the same
file split into 4 1M pieces and compressed individually)
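
Rough sketch of the block-based chunk store plus the backing-storage-key idea
from the list above. Everything here (ChunkStore, BLOCK_SIZE, the hash-named
file layout) is invented for illustration, not what the code does yet:

import hashlib
import os
import zlib

BLOCK_SIZE = 1024 * 1024          # 1M uncompressed chunks

class ChunkStore(object):
    def __init__(self, root):
        self.root = root
        if not os.path.isdir(root):
            os.makedirs(root)

    def _path(self, key):
        # fan out into subdirectories so one directory doesn't get huge
        return os.path.join(self.root, key[:2], key)

    def put(self, data):
        """Compress one block and store it under its content hash."""
        assert len(data) <= BLOCK_SIZE
        key = hashlib.sha1(data).hexdigest()
        path = self._path(key)
        if not os.path.exists(path):          # dedup: block already stored
            subdir = os.path.dirname(path)
            if not os.path.isdir(subdir):
                os.makedirs(subdir)
            with open(path, 'wb') as f:
                f.write(zlib.compress(data))
        return key

    def get(self, key):
        """Inflate a single block on demand, no up-front work."""
        with open(self._path(key), 'rb') as f:
            return zlib.decompress(f.read())

A file entry then only needs its stat info plus the list of chunk keys, and
that (stat + key list) blob could itself be stored under its own key so exact
duplicate files share it while keeping their own permissions.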
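
And an equally rough sketch of the fsck pass: rebuild refcounts from the
per-file metadata, check that every chunk before the last is a full block,
and sweep anything nothing points at. iter_file_meta() is a stand-in for
however the (stat + chunk key list) records actually get enumerated, and
store is assumed to be something ChunkStore-shaped (root directory of
hash-named files, get(key) -> data):

import collections
import os

def fsck(store, iter_file_meta, block_size=1024 * 1024):
    refs = collections.Counter()
    for meta in iter_file_meta():
        keys = meta['chunks']
        for i, key in enumerate(keys):
            refs[key] += 1
            # every chunk before the last one should be a full block
            if i < len(keys) - 1 and len(store.get(key)) != block_size:
                print("short chunk %s in the middle of a file" % key)

    # sweep: any chunk file on disk that nothing references is garbage
    for dirpath, _dirs, files in os.walk(store.root):
        for name in files:
            if refs[name] == 0:
                os.unlink(os.path.join(dirpath, name))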
-----
Other thoughts:
- If there were an easy way to "open a file" or something and have it
"touch" all its pieces, you could just run that in the mounted tree,
then "find storage/ -mtime +1" and delete that stuff to clean out cruft
- Alternatively have it keep track of block usage counts and when it goes
to "zero" then delete it
- Change load/save to be ref counted? Or have another method for
"release" and "lock" to say "Yeah I'm using this" or "This is garbage
now?"
- Possibly better compression to be had if you use a squashfs sort of block
of blocks. So you get redundancy of small blocks (32k or whatever) and
pack those together into big blocks (say 2-4M), then compress the big
block. That way you get better compression in the big block. The
question is whether this constant inflating and deflating of blocks will
be too much of a performance hit (rough sketch below)
- Maybe have a "working set" of pre-expanded sub blocks? And
automatically freeze out blocks when all the files are closed?
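
Rough sketch of the block-of-blocks plus working-set idea. SUB_BLOCK,
SUPER_BLOCK, WorkingSet and load_compressed are all invented names, and
whether the cache actually hides the inflate/deflate churn is exactly the
open question above:

import zlib
from collections import OrderedDict

SUB_BLOCK = 32 * 1024             # small blocks we actually read/write
SUPER_BLOCK = 2 * 1024 * 1024     # pack roughly this much before compressing

def pack_super_block(sub_blocks):
    """Concatenate sub-blocks and compress them as one unit."""
    return zlib.compress(b''.join(sub_blocks))

def read_sub_block(expanded, index):
    """Pull one sub-block back out of an expanded super block."""
    offset = index * SUB_BLOCK
    return expanded[offset:offset + SUB_BLOCK]

class WorkingSet(object):
    """Keep a few super blocks expanded; drop the least recently used."""
    def __init__(self, load_compressed, max_blocks=8):
        self.load_compressed = load_compressed   # callable: key -> compressed bytes
        self.max_blocks = max_blocks
        self.cache = OrderedDict()

    def get(self, key):
        if key in self.cache:
            expanded = self.cache.pop(key)        # re-inserted below so it's newest
        else:
            expanded = zlib.decompress(self.load_compressed(key))
            if len(self.cache) >= self.max_blocks:
                self.cache.popitem(last=False)    # evict the oldest entry
        self.cache[key] = expanded
        return expanded

Freezing blocks back out when the last file using them closes would then just
be dropping (or re-packing) entries from this cache in the release() handler.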