comfy-aimdo 0.2.14: Hotfix async allocator estimations #13534

Merged
comfyanonymous merged 1 commit into Comfy-Org:master from rattus128:prs/aimdo-0-2-14 on Apr 23, 2026

Conversation

@rattus128 (Contributor)

This release fixes an over-estimate of the VRAM used by the async allocator when many small tensors were in play.

It also changes the version constraint to == so we can roll aimdo forward deliberately, without new releases risking regressions in stable ComfyUI core downstream.
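For reference, the requirements.txt change this PR makes, shown as a diff (old and new constraints per the CodeRabbit walkthrough):

```diff
-comfy-aimdo>=0.2.12
+comfy-aimdo==0.2.14
```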

This is the deeper root-cause fix for a regression reported in LTX2.3 with respect to #13487.
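To illustrate the failure mode (a hypothetical sketch, not the actual comfy-aimdo estimator): if an estimator rounds every allocation up to a fixed granularity, the over-count grows with the number of tensors rather than their total size, so a workload with thousands of tiny tensors looks far larger than it is.

```python
# Hypothetical illustration (not the comfy-aimdo code): a VRAM estimator that
# pads each tensor to an allocation granularity over-counts badly when many
# small tensors are in play.

GRANULARITY = 512  # bytes; an assumed per-allocation alignment


def padded_estimate(sizes):
    """Round each tensor up to GRANULARITY: error scales with tensor count."""
    return sum((s + GRANULARITY - 1) // GRANULARITY * GRANULARITY for s in sizes)


def actual_usage(sizes):
    """What the tensors actually occupy."""
    return sum(sizes)


# 10,000 tiny 16-byte tensors (e.g. per-layer scalar parameters)
tiny = [16] * 10_000
print(padded_estimate(tiny))  # 5,120,000 bytes estimated
print(actual_usage(tiny))     # 160,000 bytes actually needed
```

With a single large tensor the two numbers agree to within one granule; with many tiny tensors the estimate is 32x the real usage, which is the kind of gap a hotfix to the estimation would close.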

Example Test Conditions:

Linux, RTX5090, LTX2.3 T2V (unchanged default workflow)


Before:

Requested to load LTXAV
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 8/8 [00:07<00:00,  1.11it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 3/3 [00:13<00:00,  4.58s/it]                                   
Requested to load AudioVAE
loaded completely;  693.46 MB loaded, full load: True
Requested to load VideoVAE
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 34.67 seconds
got prompt
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 8/8 [00:07<00:00,  1.03it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 3/3 [00:15<00:00,  5.02s/it]                                   
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 28.97 seconds

After:

...
Requested to load LTXAV
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 8/8 [00:07<00:00,  1.11it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 3/3 [00:13<00:00,  4.58s/it]                                   
Requested to load AudioVAE
loaded completely;  693.46 MB loaded, full load: True
Requested to load VideoVAE
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
got prompt
Prompt executed in 34.53 seconds
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 8/8 [00:07<00:00,  1.10it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached. Force pre-loaded 1496 weights: 44 KB.
100%|██████████| 3/3 [00:13<00:00,  4.60s/it]                                   
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 26.66 seconds

VS revert of #13487 for comparison:

Requested to load LTXAV
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached.
100%|██████████| 8/8 [00:07<00:00,  1.11it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached.
100%|██████████| 3/3 [00:13<00:00,  4.59s/it]                                   
Requested to load AudioVAE
loaded completely;  693.46 MB loaded, full load: True
Requested to load VideoVAE
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 34.35 seconds
got prompt
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached.
100%|██████████| 8/8 [00:07<00:00,  1.10it/s]                                   
Model LTXAV prepared for dynamic VRAM loading. 23838MB Staged. 1660 patches attached.
100%|██████████| 3/3 [00:13<00:00,  4.61s/it]                                   
0 models unloaded.
Model VideoVAE prepared for dynamic VRAM loading. 1384MB Staged. 0 patches attached.
Prompt executed in 26.82 seconds
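The headline numbers in the logs above are the "Prompt executed in N seconds" lines. A small helper (illustrative, not part of this PR) to pull them out of a ComfyUI log for comparison:

```python
import re


def prompt_times(log: str) -> list[float]:
    """Extract 'Prompt executed in N seconds' timings from a ComfyUI log."""
    return [float(t) for t in re.findall(r"Prompt executed in ([\d.]+) seconds", log)]


# Abbreviated logs using the timings reported above.
before_log = "Prompt executed in 34.67 seconds\n...\nPrompt executed in 28.97 seconds\n"
after_log = "Prompt executed in 34.53 seconds\n...\nPrompt executed in 26.66 seconds\n"

# The second (warm) run is the comparable one: 28.97s -> 26.66s with the fix,
# matching the revert baseline of 26.82s within noise.
print(prompt_times(before_log))  # [34.67, 28.97]
print(prompt_times(after_log))   # [34.53, 26.66]
```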

Regression Tests:

Windows RTX5060, 64GB RAM Flux2 dev FP8 ✅
Windows RTX5060, 64GB RAM LTX2.0 ✅
Windows RTX5060, 64GB wan 2.2 fp16 ✅
Linux RTX5090, Ace-step 1.5 XL turbo ✅
Linux RTX5090, 96GB flux fill inpaint ✅


coderabbitai Bot commented Apr 23, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: aae03e51-f0cb-48da-a3af-88e495752bdf

📥 Commits

Reviewing files that changed from the base of the PR and between 3cdc0d5 and f1d0eb5.

📒 Files selected for processing (1)
  • requirements.txt

📝 Walkthrough

Walkthrough

The requirements.txt file was modified to change the version constraint for the comfy-aimdo package from >=0.2.12 to ==0.2.14. This change pins the dependency to a specific version rather than allowing any version at or above the minimum specified version.
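The practical difference between the two specifiers can be sketched with a toy resolver (simplified for illustration; real installers follow PEP 440 semantics, e.g. via the packaging library):

```python
def parse(version: str) -> tuple[int, ...]:
    """Parse a simple dotted version like '0.2.14' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def allowed(version: str, spec: str) -> bool:
    """Toy check handling only the two operators relevant to this PR."""
    if spec.startswith("=="):
        return parse(version) == parse(spec[2:])
    if spec.startswith(">="):
        return parse(version) >= parse(spec[2:])
    raise ValueError(f"unsupported specifier: {spec}")


# Under the old '>=0.2.12', a future 0.3.0 release would install automatically;
# under '==0.2.14', only the tested release does.
print(allowed("0.3.0", ">=0.2.12"))   # True
print(allowed("0.3.0", "==0.2.14"))   # False
print(allowed("0.2.14", "==0.2.14"))  # True
```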

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately describes the main change: updating comfy-aimdo to version 0.2.14 to fix async allocator VRAM estimation issues.
Description check ✅ Passed The description clearly explains the fix for VRAM over-estimation, the versioning scheme change, and includes test results demonstrating the improvement.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
Linked Issues check ✅ Passed Check skipped because no linked issues were found for this pull request.
Out of Scope Changes check ✅ Passed Check skipped because no linked issues were found for this pull request.


@comfyanonymous comfyanonymous merged commit ef8f3cb into Comfy-Org:master Apr 23, 2026
14 of 22 checks passed
Kosinkadink added a commit that referenced this pull request Apr 24, 2026
* fix: pin SQLAlchemy>=2.0 in requirements.txt (fixes #13036) (#13316)

* Refactor io to IO in nodes_ace.py (#13485)

* Bump comfyui-frontend-package to 1.42.12 (#13489)

* Make the ltx audio vae more native. (#13486)

* feat(api-nodes): add automatic downscaling of videos for ByteDance 2 nodes (#13465)

* Support standalone LTXV audio VAEs (#13499)

* [Partner Nodes]  added 4K resolution for Veo models; added Veo 3 Lite model (#13330)

* feat(api nodes): added 4K resolution for Veo models; added Veo 3 Lite model

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* increase poll_interval from 5 to 9

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>

* Bump comfyui-frontend-package to 1.42.14 (#13493)

* Add gpt-image-2 as version option (#13501)

* Allow logging in comfy app files. (#13505)

* chore: update workflow templates to v0.9.59 (#13507)

* fix(veo): reject 4K resolution for veo-3.0 models in Veo3VideoGenerationNode (#13504)

The tooltip on the resolution input states that 4K is not available for
veo-3.1-lite or veo-3.0 models, but the execute guard only rejected the
lite combination. Selecting 4K with veo-3.0-generate-001 or
veo-3.0-fast-generate-001 would fall through and hit the upstream API
with an invalid request.

Broaden the guard to match the documented behavior and update the error
message accordingly.

Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>

* feat: RIFE and FILM frame interpolation model support (CORE-29) (#13258)

* initial RIFE support

* Also support FILM

* Better RAM usage, reduce FILM VRAM peak

* Add model folder placeholder

* Fix oom fallback frame loss

* Remove torch.compile for now

* Rename model input

* Shorter input type name

---------

* fix: use Parameter assignment for Stable_Zero123 cc_projection weights (fixes #13492) (#13518)

On Windows with aimdo enabled, disable_weight_init.Linear uses lazy
initialization that sets weight and bias to None to avoid unnecessary
memory allocation. This caused a crash when copy_() was called on the
None weight attribute in Stable_Zero123.__init__.

Replace copy_() with direct torch.nn.Parameter assignment, which works
correctly on both Windows (aimdo enabled) and other platforms.

* Derive InterruptProcessingException from BaseException (#13523)

* bump manager version to 4.2.1 (#13516)

* ModelPatcherDynamic: force cast stray weights on comfy layers (#13487)

the mixed_precision ops can have input_scale parameters that are used in
tensor math but aren't a weight or bias, so they don't get proper VRAM
management. Treat these as force-castable parameters like the non-comfy
weights; random params are already buffers.

* Update logging level for invalid version format (#13526)

* [Partner Nodes] add SD2 real human support (#13509)

* feat(api-nodes): add SD2 real human support

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* fix: add validation before uploading Assets

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* Add asset_id and group_id displaying on the node

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* extend poll_op to use instead of custom async cycle

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* added the polling for the "Active" status after asset creation

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* updated tooltip for group_id

* allow usage of real human in the ByteDance2FirstLastFrame node

* add reference count limits

* corrected price in status when input assets contain video

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* feat: SAM (segment anything) 3.1 support (CORE-34) (#13408)

* [Partner Nodes] GPTImage: fix price badges, add new resolutions (#13519)

* fix(api-nodes): fixed price badges, add new resolutions

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* properly calculate the total run cost when "n > 1"

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* chore: update workflow templates to v0.9.61 (#13533)

* chore: update embedded docs to v0.4.4 (#13535)

* add 4K resolution to Kling nodes (#13536)

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* Fix LTXV Reference Audio node (#13531)

* comfy-aimdo 0.2.14: Hotfix async allocator estimations (#13534)

This fixes an over-estimate of the VRAM used by the async allocator when
many small tensors were in play.

Also changes the version constraint to == so we can roll aimdo forward
without risking regressions in stable ComfyUI core downstream.

* Disable sageattention for SAM3 (#13529)

Causes Nans

* execution: Add anti-cycle validation (#13169)

Currently, if the graph contains a cycle, validation just recurses
infinitely, hits a catch-all, and throws a generic error against the output
node that seeded the validation. Instead, fail the offending cycling node
chain and handle it as an error in its own right.

Co-authored-by: guill <jacob.e.segal@gmail.com>

* chore: update workflow templates to v0.9.62 (#13539)

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Octopus <liyuan851277048@icloud.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Comfy Org PR Bot <snomiao+comfy-pr@gmail.com>
Co-authored-by: Alexander Piskun <13381981+bigcat88@users.noreply.github.com>
Co-authored-by: Jukka Seppänen <40791699+kijai@users.noreply.github.com>
Co-authored-by: AustinMroz <austin@comfy.org>
Co-authored-by: Daxiong (Lin) <contact@comfyui-wiki.com>
Co-authored-by: Matt Miller <matt@miller-media.com>
Co-authored-by: blepping <157360029+blepping@users.noreply.github.com>
Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
Co-authored-by: rattus <46076784+rattus128@users.noreply.github.com>
Co-authored-by: guill <jacob.e.segal@gmail.com>