Switch to using total VRAM instead of free VRAM to estimate tile size#2929

Merged
joeyballentine merged 2 commits into main from use-total-vram
Jun 2, 2024
Conversation

@joeyballentine (Member)

We recently switched from clearing the CUDA cache after each upscale to clearing it once at the end of the chain. This means PyTorch now keeps VRAM usage somewhat high. That is fine: it is PyTorch's intended behavior, done to improve performance. Even though the allocator holds onto this VRAM and it looks like it's in use, PyTorch is able to reuse that memory when allocating tensors. In fact, users apparently should not be using clear cache at all.

However, since this cached VRAM appears to be in use and unavailable, we need to base our estimations on total system VRAM rather than free VRAM; otherwise we might estimate incorrectly.

Since some of the total is likely to be in use already, I lowered the budget calculation a bit to compensate.
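The change described above amounts to basing the tile-size budget on the device's total VRAM (in PyTorch, queryable via `torch.cuda.get_device_properties(device).total_memory`, whereas `torch.cuda.mem_get_info()` reports free VRAM, which the caching allocator under-reports) and scaling it down to leave headroom. A minimal sketch under those assumptions — the `estimate_vram_budget` helper and the 0.7 factor are illustrative, not the PR's actual code:

```python
def estimate_vram_budget(total_vram_bytes: int, budget_factor: float = 0.7) -> int:
    """Return the number of bytes the upscaler may plan to use for tiles.

    PyTorch's caching allocator holds on to freed memory, so "free" VRAM
    under-reports what is actually reusable. We therefore base the budget
    on total VRAM, scaled down to compensate for memory that other
    processes and the driver already occupy.
    """
    return int(total_vram_bytes * budget_factor)


# Example: compute the budget for a hypothetical 8 GiB card.
total = 8 * 1024**3
budget = estimate_vram_budget(total)
print(f"budget: {budget / 1024**3:.1f} GiB of {total / 1024**3:.0f} GiB total")
```

The tile size would then be derived from this budget as before; only the input to the estimate changes (total instead of free VRAM), with the lower factor absorbing pre-existing usage.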

@joeyballentine joeyballentine merged commit 8caf78b into main Jun 2, 2024
@joeyballentine joeyballentine deleted the use-total-vram branch June 2, 2024 22:48


3 participants