author    Steve Slaven <bpk@hoopajoo.net>    2009-07-29 05:59:38 (GMT)
committer Steve Slaven <bpk@hoopajoo.net>    2009-07-29 05:59:38 (GMT)
commit    ec1d71eadf323e9a2b16352a37b5acdae68e4987 (patch)
tree      c79ac59ad21a05e9fb07a88e8b9b7448301a38b0
parent    614720058d96957be369769e6add7f556f642dc8 (diff)
Some thoughts/info on stuff that should be done some day
-rw-r--r--  README  53
1 file changed, 53 insertions(+), 0 deletions(-)
diff --git a/README b/README
new file mode 100644
index 0000000..dbf4488
--- /dev/null
+++ b/README
@@ -0,0 +1,53 @@
+So far this is just a copy of the nullfs example from
+/usr/share/doc/python-fuse with some stuff renamed
+
+To make it work:
+
+- How do you get another arg in the options?
+  - pydoc fuse shows some magic option parser stuff (see the sketch
+    after this list)
+  - we need this for the "source" directory, i.e. the backing storage
+    area
+- Better to compress chunks? Or have a blob more like a zip?
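+
+A sketch of that option parser magic, based on the xmp.py example that
+ships with python-fuse (the "source" option name is just what we'd
+probably call it, nothing is wired up yet):
+
+    import fuse
+    fuse.fuse_python_api = (0, 2)
+
+    class FuseArchive(fuse.Fuse):
+        pass    # filesystem methods go here
+
+    server = FuseArchive(version="%prog " + fuse.__version__,
+                         usage=fuse.Fuse.fusage, dash_s_do='setsingle')
+    # mountopt makes this usable as "-o source=PATH"; with values=server
+    # below, the parsed value ends up in server.source
+    server.parser.add_option(mountopt="source", metavar="PATH",
+                             help="backing storage directory")
+    server.parse(values=server, errex=1)
+    server.main()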
+
+-----
+
+TODO:
+
++ Make inflate/deflate block-based as needed, so we don't have to do a
+  bunch of work up front and waste a bunch of space on disk
+  - done
+- Make files just contain a backing storage key; this key will reference
+  what we have in them now (the data list and stat info) so that complete
+  duplicate files will not take up a few extra megs each and will still
+  be able to have their own permissions and stuff (see the first sketch
+  after this list)
++ Copying read-only files doesn't work (permission denied on close,
+  because that is the point where we open and write to the original
+  file)
+  - done - we open a file handle at __init__ now and use that
+- R/W is basically ignored at this point
+- fsck:
+  - check that every chunk except the last is a full block size (this
+    would be a good assert too)
+- delete unused chunks (refcounting)
+- pack multiple chunks into "super chunks" like cromfs to get better
+  compression (e.g. 4M of data will presumably compress better than the
+  same file split into four 1M pieces compressed individually; see the
+  measurement sketch after this list)
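+
+A sketch of the backing storage key idea (names are hypothetical, this
+is not what the code does yet) - key the chunk list by a content hash
+so byte-identical files share one record, while each file node keeps
+its own stat so permissions stay per-file:
+
+    import hashlib, os, pickle
+
+    def save_chunk_list(chunks, storage='storage'):
+        # Content-addressed: identical chunk lists map to the same key
+        data = pickle.dumps(chunks)
+        key = hashlib.sha1(data).hexdigest()
+        path = os.path.join(storage, key)
+        if not os.path.exists(path):
+            open(path, 'wb').write(data)
+        return key
+
+    def save_file_node(node_path, chunks, stat_info):
+        # The visible file only stores its own stat plus the shared key
+        pickle.dump({'stat': stat_info,
+                     'data_key': save_chunk_list(chunks)},
+                    open(node_path, 'wb'))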
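+
+And a quick way to measure the super chunk claim (zlib here, the real
+container format is undecided; note zlib's 32k window means the win
+shows up mostly with small pieces, as in the small-block idea under
+"Other thoughts" below):
+
+    import random, zlib
+
+    # A few megs of pseudo-text with redundancy spread over the buffer
+    words = ['foo', 'bar', 'baz', 'quux', 'chunk', 'storage']
+    data = ' '.join(random.choice(words) for _ in xrange(10 ** 6))
+
+    whole = len(zlib.compress(data))
+    pieces = sum(len(zlib.compress(data[i:i + 32768]))
+                 for i in xrange(0, len(data), 32768))
+    print "one block: %d bytes, 32k pieces: %d bytes" % (whole, pieces)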
+
+-----
+
+Other thoughts:
+
+- If there was an easy way to "open a file" or something and have it
+  "touch" all its pieces, you could just run that in the mounted tree,
+  then "find storage/ -mtime +1" and delete that stuff to clean out
+  cruft
+- Alternatively have it keep track of block usage counts, and when a
+  count goes to zero delete the block (see the first sketch after this
+  list)
+  - Change load/save to be ref-counted? Or have other methods for
+    "release" and "lock" to say "Yeah, I'm using this" or "This is
+    garbage now"?
+- Possibly better compression to be had if you use a squashfs sort of
+  block of blocks. So you get redundancy of small blocks (32k or
+  whatever) and pack those together into big blocks (say 2-4M), then
+  compress the big block. That way you get better compression in the
+  big block. The question is whether this constant inflating and
+  deflating of blocks will be too much of a performance hit
+  - Maybe have a "working set" of pre-expanded sub blocks? And
+    automatically freeze out blocks when all the files are closed?
+    (see the second sketch after this list)
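+
+A sketch of the lock/release idea (ChunkStore is hypothetical; real
+code would also have to persist the counts across mounts):
+
+    import os
+
+    class ChunkStore:
+        def __init__(self, storage='storage'):
+            self.storage = storage
+            self.refs = {}    # chunk key -> use count
+
+        def lock(self, key):
+            # "Yeah I'm using this"
+            self.refs[key] = self.refs.get(key, 0) + 1
+
+        def release(self, key):
+            # "This is garbage now?" - delete when nobody uses it
+            self.refs[key] -= 1
+            if self.refs[key] == 0:
+                del self.refs[key]
+                os.unlink(os.path.join(self.storage, key))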
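+
+And a sketch of the working set idea - keep the last few expanded super
+blocks in memory and drop the least recently used one when the set is
+full (assumes whole zlib-compressed super blocks on disk, which is not
+decided yet):
+
+    import os, zlib
+
+    class WorkingSet:
+        def __init__(self, storage='storage', limit=4):
+            self.storage = storage
+            self.limit = limit
+            self.blocks = {}    # key -> expanded super block
+            self.order = []     # least recently used key first
+
+        def get(self, key):
+            if key in self.blocks:
+                self.order.remove(key)
+            else:
+                if len(self.order) >= self.limit:
+                    # Freeze out the coldest block
+                    del self.blocks[self.order.pop(0)]
+                raw = open(os.path.join(self.storage, key), 'rb').read()
+                self.blocks[key] = zlib.decompress(raw)
+            self.order.append(key)
+            return self.blocks[key]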