 README | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/README b/README
index dbf4488..c8bea43 100644
--- a/README
+++ b/README
@@ -27,9 +27,10 @@ TODO:
- test that each chunk before the last is a full block size (this would
be a good assert too)
- delete unused chunks (refcounting)
-- pack multiple chunks into "super chunks" like cromfs to get better
+- pack multiple chunks into "super chunks" like cromfs/squashfs to get better
compression (e.g. 4M of data will compress better than that same file
split into 4 1M pieces and compressed individually, presumably)
+- Speed it up? Or is it "fast enough"?
-----
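
The super-chunk claim in the hunk above is easy to check empirically. Here is
a minimal standalone sketch (Python 3 with the stdlib lzma module, not code
from this repo) using deliberately extreme data, a 1M random block repeated
four times, so the redundancy sits farther apart than any single 1M chunk:

    import lzma
    import os

    base = os.urandom(1 << 20)   # 1M of incompressible random bytes
    data = base * 4              # the same 1M repeated: 4M total

    # One "super chunk": xz's dictionary spans the whole buffer, so the
    # three duplicate copies nearly vanish.
    whole = len(lzma.compress(data))

    # The same 4M split into four 1M chunks compressed independently:
    # each chunk is pure random data and barely compresses at all.
    step = 1 << 20
    split = sum(len(lzma.compress(data[i:i + step]))
                for i in range(0, len(data), step))

    print("one 4M blob   :", whole, "bytes")
    print("four 1M chunks:", split, "bytes")

Real files are less extreme, but any redundancy repeating at distances longer
than the chunk size is lost when chunks are compressed independently, which is
the trade-off this TODO item is pointing at.
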
@@ -51,3 +52,9 @@ Other thoughts:
much of a performance hit
- Maybe have a "working set" of pre-expanded sub blocks? And
automatically freeze out blocks when all the files are closed?
+- This might work well over a remote link for random access to large files
+ using sshfs or ftpfs or something, since you don't have to download the
+ whole original file to get chunks out: you download the index and then
+ just the chunks you want
+- Get rid of cPickle; it's way more than we need for saving essentially a
+ few ints and a block list, even though it is very convenient
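
The remote-link idea only requires the reader to seek instead of streaming the
whole archive. A rough sketch of that access pattern, assuming a hypothetical
on-disk layout (the real format in this repo may differ): an 8-byte chunk
count, a table of (offset, compressed length) pairs, then the zlib-compressed
chunks themselves:

    import struct
    import zlib

    def read_chunk(path, n):
        # Over sshfs (or anything else that maps seeks to ranged reads),
        # this pulls only the index plus the one chunk off the wire.
        with open(path, "rb") as f:
            (count,) = struct.unpack(">Q", f.read(8))
            if not 0 <= n < count:
                raise IndexError("no such chunk: %d" % n)
            f.seek(8 + 16 * n)                   # into the index table
            offset, clen = struct.unpack(">QQ", f.read(16))
            f.seek(offset)                       # to the chunk itself
            return zlib.decompress(f.read(clen))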
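
On the cPickle point: if the header really is just a few ints and a block
list, the stdlib struct module covers it with a fixed, versionable layout.
A sketch under that assumption (the field names and magic are made up for
illustration, not taken from this repo):

    import struct

    MAGIC = b"CHNK"  # hypothetical magic; use whatever the format defines

    def save_header(f, block_size, file_size, blocks):
        # magic, two ints, a block count, then the block offsets
        f.write(MAGIC)
        f.write(struct.pack(">QQQ", block_size, file_size, len(blocks)))
        f.write(struct.pack(">%dQ" % len(blocks), *blocks))

    def load_header(f):
        if f.read(4) != MAGIC:
            raise ValueError("bad magic")
        block_size, file_size, count = struct.unpack(">QQQ", f.read(24))
        blocks = list(struct.unpack(">%dQ" % count, f.read(8 * count)))
        return block_size, file_size, blocks

Unlike a pickle, this can't execute anything on load, and the layout is
pinned down byte for byte.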