author    Steve Slaven <bpk@hoopajoo.net>  2009-07-29 17:29:17 (GMT)
committer Steve Slaven <bpk@hoopajoo.net>  2009-07-29 17:29:17 (GMT)
commit  0434abd9f4ee40d79a89f09deda971e9366ef335 (patch)
tree    d2664d69ae27fe578a3c2c1fb675ea9b251cf7e0 /README
parent  c6894243fd5255ada2a0f3eaf680c19276d2293e (diff)
More notes/thoughts
Diffstat (limited to 'README')
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/README b/README
index dbf4488..c8bea43 100644
--- a/README
+++ b/README
@@ -27,9 +27,10 @@ TODO:
- test each chunk < last is a full block size (this would be a good
assert too)
- delete unused chunks (refcounting)
-- pack multiple chunks into "super chunks" like cromfs to get better
+- pack multiple chunks into "super chunks" like cromfs/squashfs to get better
compression (e.g. 4M of data will compress better than that same file
split into 4 1M pieces and compressed individually presumably)
+- Speed it up? Or is it "fast enough"?
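The super-chunk bullet above rests on a testable claim: one large buffer compresses better than the same data compressed as separate pieces, because each independent stream pays its own header and loses matches that cross piece boundaries. A minimal sketch with the stdlib `zlib` module (sizes scaled down from the 4M/1M example for speed; the sample payload is made up):

```python
# Compare compressing one buffer vs. the same data in 4 separate pieces.
import zlib

data = b"some repetitive chunk payload " * 30000  # ~900 KB of sample data
piece = len(data) // 4
pieces = [data[i:i + piece] for i in range(0, len(data), piece)]

whole = len(zlib.compress(data))                    # one "super chunk"
split = sum(len(zlib.compress(p)) for p in pieces)  # 4 independent chunks

print(whole, split)  # whole should come out smaller
```

The gap grows with less repetitive data, since each restarted stream also has to relearn the input's statistics from scratch.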
@@ -51,3 +52,9 @@ Other thoughts:
much of a performance hit
- Maybe have a "working set" of pre-expanded sub blocks? And
automatically freeze out blocks when all the files are closed?
+- This might work well over a remote link for random access to large files
+ using sshfs or ftpfs or something, since you don't have to download the
+ whole original file to get chunks out: you download the index, then just
+ the chunks you want
+- Get rid of cpickle; it's way more than we need for saving what is
+ essentially a few ints and a block list, even though it is very convenient
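The last bullet can be sketched with the stdlib `struct` module. The field names, magic number, and layout below are hypothetical, assuming the metadata really is just a few ints plus a list of fixed-size chunk digests; this is not the project's actual on-disk format.

```python
# Hypothetical cPickle-free index format: a small packed header followed
# by fixed-size chunk digests.
import struct

MAGIC = b"FARC"    # made-up magic number
DIGEST_LEN = 20    # e.g. SHA-1 sized chunk ids
HEADER = "<4sQIQ"  # magic, file size, block size, block count

def dump_index(filesize, blocksize, digests):
    buf = struct.pack(HEADER, MAGIC, filesize, blocksize, len(digests))
    return buf + b"".join(digests)

def load_index(buf):
    magic, filesize, blocksize, count = struct.unpack_from(HEADER, buf)
    assert magic == MAGIC
    off = struct.calcsize(HEADER)
    digests = [buf[off + i * DIGEST_LEN:off + (i + 1) * DIGEST_LEN]
               for i in range(count)]
    return filesize, blocksize, digests
```

A fixed layout like this also avoids unpickling untrusted data, which matters if indexes ever arrive over sshfs/ftpfs from a machine you don't control.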