forked from BlueAmulet/ESRGAN
This code is pretty clearly assuming CUDA:
Lines 44 to 55 in b13baab
```python
try:
    result = upscale_function(lr_img)
    return result, current_depth
except RuntimeError as e:
    # Check to see if its actually the CUDA out of memory error
    if "CUDA" in str(e):
        # Collect garbage (clear VRAM)
        torch.cuda.empty_cache()
        gc.collect()
    # Re-raise the exception if not an OOM error
    else:
        raise RuntimeError(e)
```
When doing CPU inference (at least on most Linux systems), an out-of-memory condition won't raise a CUDA exception; the kernel's OOM killer will simply terminate a process (probably the Python process, but possibly some unrelated process on the system). On Windows it's even worse: the entire system is likely to lock up and require a hard restart.
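For reference, a device-agnostic OOM check could look something like the sketch below (a hypothetical helper, not code from this repo). On CUDA, PyTorch raises a `RuntimeError` whose message contains "CUDA out of memory"; on CPU, a failed allocation surfaces either as a plain `MemoryError` or as a PyTorch allocator `RuntimeError`, assuming the OOM killer hasn't already terminated the process (in which case no exception is raised at all):

```python
def is_oom_error(exc: BaseException) -> bool:
    """Return True if exc looks like an out-of-memory error on any device."""
    # CPU allocation failure raised by Python itself
    if isinstance(exc, MemoryError):
        return True
    msg = str(exc)
    # CUDA OOM message, and (assumption) PyTorch's CPU allocator failure message
    return "CUDA out of memory" in msg or "DefaultCPUAllocator" in msg
```

This only helps once an exception actually reaches Python, which, as noted above, is not guaranteed on CPU.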
I see two possible approaches to fix it:
- Allow the user to choose to explicitly provide a tile size, as upstream Real-ESRGAN's inference code does.
- Use a heuristic to detect high (but not critical) RAM usage, e.g. comparing the machine's total RAM to current RAM+swap usage, and ramp the tile size up until memory usage gets too high.
Option 2 seems very messy to me, and I suspect it will not yield optimal results.
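Option 1 is mostly bookkeeping: given a user-supplied tile size, split the input into overlapping crops, run each through the model, and stitch the results. A minimal sketch of the window computation (names are hypothetical; upstream Real-ESRGAN exposes this via its `--tile` flag):

```python
def tile_windows(height, width, tile, overlap=8):
    """Yield (y0, y1, x0, x1) crops covering a height x width image.

    Adjacent tiles overlap by `overlap` pixels so seams can be blended
    or cropped away after upscaling.
    """
    step = tile - overlap
    for y in range(0, max(height - overlap, 1), step):
        for x in range(0, max(width - overlap, 1), step):
            yield (y, min(y + tile, height), x, min(x + tile, width))
```

Each window would then be passed to `upscale_function` individually, keeping peak memory bounded by the tile size rather than the full image size.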