FUSE micro-opt benchmarking #5110

@ThomasWaldmann

Description

If somebody has some time for FUSE benchmarking:

diff --git a/src/borg/fuse.py b/src/borg/fuse.py
index 429790e4..27ab1c1a 100644
--- a/src/borg/fuse.py
+++ b/src/borg/fuse.py
@@ -644,12 +644,12 @@ def read(self, fh, offset, size):
                 data = self.data_cache[id]
                 if offset + n == len(data):
                     # evict fully read chunk from cache
-                    del self.data_cache[id]
+                    pass # del self.data_cache[id]
             else:
                 data = self.key.decrypt(id, self.repository_uncached.get(id))
-                if offset + n < len(data):
+                if True: # offset + n < len(data):
                     # chunk was only partially read, cache it
                     self.data_cache[id] = data
             parts.append(data[offset:offset + n])
             offset = 0
             size -= n

The two changes remove both the selective caching (only partially read chunks were cached) and the eviction of fully read chunks from the cache. While the current behavior sounds obviously right for sequential reads, it may be counterproductive for repeating chunks (like all-zero chunks), which get decrypted again on every occurrence.
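To make the trade-off concrete, here is a minimal sketch (not borg's actual code) that simulates both strategies for a sequential read over a chunked file, modeling decrypt cost as cache misses. All names and numbers are illustrative; repeated chunk ids stand in for deduplicated (e.g. all-zero) chunks.

```python
def simulate(chunk_ids, chunk_size, read_size, selective=True):
    """Return the number of cache misses ("decrypts") for a sequential read.

    selective=True models the current behavior: cache only partially read
    chunks and evict a chunk once it has been read to its end.
    selective=False models the patched behavior: cache every chunk, never evict.
    """
    cache = {}
    misses = 0
    total = len(chunk_ids) * chunk_size
    pos = 0
    # walk the file as read(offset, size) calls of read_size bytes each
    while pos < total:
        size = min(read_size, total - pos)
        while size:
            idx = pos // chunk_size
            cid = chunk_ids[idx]          # repeated ids = deduplicated chunks
            offset = pos % chunk_size
            n = min(chunk_size - offset, size)
            if cid in cache:
                if selective and offset + n == chunk_size:
                    del cache[cid]        # evict fully read chunk
            else:
                misses += 1               # would call key.decrypt(...)
                if (not selective) or offset + n < chunk_size:
                    cache[cid] = True     # cache the (partially read) chunk
            pos += n
            size -= n
    return misses
```

For 100 occurrences of the same chunk read in two halves, the selective strategy decrypts 100 times while the unconditional cache decrypts once; for 100 distinct chunks, both strategies decrypt 100 times, but the unconditional cache keeps growing, which is why a bounded/larger `self.data_cache` is worth benchmarking too.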

If someone wants to benchmark these changes (and maybe also try a bigger self.data_cache), that would be helpful!

Try:

  • big files, small files
  • files with repeating chunks (like sparse [VM] disk images)
  • default chunksize, small chunksize
