| author | Steve Slaven <bpk@hoopajoo.net> | 2009-07-29 17:29:17 (GMT) |
|---|---|---|
| committer | Steve Slaven <bpk@hoopajoo.net> | 2009-07-29 17:29:17 (GMT) |
| commit | 0434abd9f4ee40d79a89f09deda971e9366ef335 (patch) | |
| tree | d2664d69ae27fe578a3c2c1fb675ea9b251cf7e0 | |
| parent | c6894243fd5255ada2a0f3eaf680c19276d2293e (diff) | |
More notes/thoughts
-rw-r--r-- | README | 9
1 file changed, 8 insertions(+), 1 deletion(-)
```diff
@@ -27,9 +27,10 @@ TODO:
 - test each chunk < last is a full block size (this would be a good assert
   too)
 - delete unused chunks (refcounting)
-- pack multiple chunks in to "super chunks" like cromfs to get better
+- pack multiple chunks in to "super chunks" like cromfs/squashfs to get better
   compression (e.g. 4M of data will compress better than that same file
   split in to 4 1M pieces and compressed individually presumably)
+- Speed it up? Or is it "fast enough"
 
 -----
 
@@ -51,3 +52,9 @@ Other thoughts:
   much of a performance hit
 - Maybe have a "working set" of pre-expanded sub blocks? And automatically
   freeze out blocks when all the files are closed?
+- This might work well over a remote link for random-access to large files
+  using sshfs or ftpfs or something since you don't have to download the
+  whole original file to get chunks out, you download the index then just
+  the chunks you want
+- Get rid of cpickle, it's way more than we need for saving essentially a
+  few ints and a block list even though it is very convenient
```
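The super-chunk rationale is easy to check empirically: a compressor's dictionary restarts at every chunk boundary, so one 4M stream usually compresses better than four independent 1M streams. A minimal sketch using zlib (the sample data and sizes here are illustrative, not fusearchive's real chunk format):

```python
import zlib

# Build ~4M of mildly compressible sample input (illustrative data only).
data = b"fusearchive sample chunk payload with repeating structure\n" * 70000
data = data[:4 * 1024 * 1024]

piece = 1024 * 1024

# Compress the whole stream at once...
whole = len(zlib.compress(data, 9))

# ...versus compressing each 1M piece independently, the way separate
# chunks would be: every piece restarts with an empty dictionary.
split = sum(len(zlib.compress(data[i:i + piece], 9))
            for i in range(0, len(data), piece))

print("4M as one stream:  %d bytes" % whole)
print("4 x 1M separately: %d bytes" % split)
```

With zlib's 32 KB window the gap for 1M pieces is modest; larger-window compressors, such as the LZMA that cromfs uses, gain considerably more from packing chunks together.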
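The remote-access idea works because any read maps to a small, computable set of chunk indices: fetch the index once, then only the chunks the read touches. A sketch assuming fixed-size chunks (`chunks_for_read` is a hypothetical helper, not part of fusearchive):

```python
def chunks_for_read(offset, length, chunk_size):
    """Return the chunk indices a read of `length` bytes at `offset` touches."""
    if length <= 0:
        return []
    first = offset // chunk_size
    last = (offset + length - 1) // chunk_size
    return list(range(first, last + 1))

# e.g. a 64K read at offset 10M of a file stored as 1M chunks
print(chunks_for_read(10 * 1024 * 1024, 64 * 1024, 1024 * 1024))  # [10]
```

Over sshfs or ftpfs this is the whole win: a few kilobytes of index plus one or two chunks instead of the entire original file.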
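On the cpickle bullet: if the per-file metadata really is just a few ints plus a list of fixed-width chunk keys, `struct` alone is enough. A hypothetical flat layout, assuming 20-byte SHA-1 chunk digests (field choices and names here are illustrative, not the actual on-disk format):

```python
import struct

DIGEST_LEN = 20    # assumption: chunks keyed by 20-byte SHA-1 digests
HEADER = "!QQI"    # file size, chunk size, chunk count (big-endian)

def dump_meta(file_size, chunk_size, digests):
    # Fixed header followed by the raw digests, back to back; no pickle.
    return struct.pack(HEADER, file_size, chunk_size, len(digests)) + b"".join(digests)

def load_meta(buf):
    file_size, chunk_size, count = struct.unpack_from(HEADER, buf, 0)
    off = struct.calcsize(HEADER)
    digests = [buf[off + i * DIGEST_LEN:off + (i + 1) * DIGEST_LEN]
               for i in range(count)]
    return file_size, chunk_size, digests
```

Besides being smaller and faster than a pickle, a fixed layout like this never executes anything on load, which matters if index files can come from an untrusted store.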