feat: RIFE and FILM frame interpolation model support (CORE-29) #13258
Kosinkadink merged 14 commits into Comfy-Org:master from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the CodeRabbit settings.
📝 Walkthrough
Adds frame interpolation support via a new extension module.
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around lines 107-110: The F.pad call on frames with mode="reflect" can fail
when pad_h >= H or pad_w >= W (e.g., 16×64 or 32×32 inputs). Update the logic
around the pad_h/pad_w calculation so that, before calling F.pad, the mode is
"reflect" only if pad_h < H and pad_w < W, falling back to "replicate" (or
another non-reflect mode) otherwise, and pass that mode into
F.pad(frames, (0, pad_w, 0, pad_h), mode=mode) so small-frame cases are
handled safely.
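The suggested guard can be sketched as follows. This is a minimal illustration, not the node's actual code; the helper name `pad_to_multiple` and the multiple of 32 are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(frames: torch.Tensor, multiple: int = 32) -> torch.Tensor:
    """Pad NCHW frames on the bottom/right so H and W become multiples of `multiple`.

    "reflect" padding requires each pad amount to be strictly smaller than the
    corresponding input dimension, so small frames fall back to "replicate".
    (Helper name and the multiple-of-32 choice are illustrative.)
    """
    _, _, H, W = frames.shape
    pad_h = (multiple - H % multiple) % multiple
    pad_w = (multiple - W % multiple) % multiple
    mode = "reflect" if pad_h < H and pad_w < W else "replicate"
    return F.pad(frames, (0, pad_w, 0, pad_h), mode=mode)
```

For example, a 16×64 frame needs pad_h = 16, which equals H, so "reflect" would raise an error; the fallback pads it to 32×64 with "replicate" instead.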
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b9519d0a-33fd-4ed3-8b9b-b8a2b44211ad
📒 Files selected for processing (4)
comfy_extras/nodes_frame_interpolation.py
comfy_extras/rife_model/ifnet.py
folder_paths.py
nodes.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around lines 161-167: The multi_fn branch (where multi_fn computes mids via
forward_multi_timestep()) requests all timestep outputs at once and bypasses
the OOM-aware batch-halving fallback. Update this branch to catch CUDA OOM (or
other memory errors) from multi_fn/forward_multi_timestep and retry by
splitting t_values into smaller chunks (e.g., repeatedly halving the chunk
size) and calling multi_fn/forward_multi_timestep per chunk, copying each
chunk into result as is done now, with the same dtype/non_blocking logic and
pbar/tqdm updates. Clear caches between retries (feat_cache or
torch.cuda.empty_cache()) and propagate the original error if even single-step
chunks fail.
- Around lines 151-189: The inference loop calls inference_model and
inference_model.extract_features with only eval() set, which still builds
autograd graphs and retains tensors in feat_cache. Wrap the loop over frame
pairs (the try: for i in range(total_pairs): ... finally:) in
torch.inference_mode() so all calls to extract_features, inference_model(...),
and multi_fn(...) run without gradient tracking, preventing unnecessary graph
retention and reducing memory/OOM pressure; any usage of feat_cache, multi_fn,
and inference_model should execute inside the inference_mode block.
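The inference_mode point can be demonstrated in isolation. The Conv2d below is just a stand-in for the loaded RIFE/FILM model; the shapes are illustrative.

```python
import torch

# Stand-in model; the real nodes use the loaded interpolation module.
model = torch.nn.Conv2d(3, 3, 3, padding=1).eval()
frames = torch.rand(2, 3, 64, 64)

# eval() alone does not stop autograd from building a graph:
tracked = model(frames)
assert tracked.requires_grad  # gradients still tracked, graph retained

# inference_mode() disables gradient tracking for everything inside:
with torch.inference_mode():
    out = model(frames)
assert not out.requires_grad  # no graph is built or retained
```

Because no autograd graph is kept alive, intermediate activations (including anything stashed in a feature cache) become eligible for deallocation as soon as they go out of scope.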
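The batch-halving OOM fallback described in the first inline comment above can be sketched in plain Python. Here `multi_fn`, `free_cache`, and `MemoryError` are stand-ins for the model's multi-timestep forward, a cache-clearing hook (e.g. torch.cuda.empty_cache()), and the CUDA OOM exception; none of these names come from the actual node code.

```python
def interpolate_with_fallback(multi_fn, t_values, free_cache=lambda: None):
    """Try all timesteps at once; on OOM, halve the chunk size and retry.

    Propagates the original error if even single-timestep chunks fail.
    """
    chunk = max(1, len(t_values))
    while True:
        results = []
        try:
            for start in range(0, len(t_values), chunk):
                results.extend(multi_fn(t_values[start:start + chunk]))
            return results
        except MemoryError:
            if chunk == 1:
                raise              # single-step chunks still fail: propagate
            free_cache()           # e.g. torch.cuda.empty_cache()
            chunk = max(1, chunk // 2)
```

In the real node, each successful chunk would be copied into the preallocated result tensor with the existing dtype/non_blocking logic, and the progress bars updated per chunk.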
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 43f05f12-4f18-425e-bfd3-3376c4458e9f
📒 Files selected for processing (3)
comfy_extras/frame_interpolation_models/film_net.py
comfy_extras/frame_interpolation_models/ifnet.py
comfy_extras/nodes_frame_interpolation.py
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around lines 168-180: The OOM fallback in the multi_fn branch incorrectly
uses "continue", which skips writing outputs for the current pair (variables:
multi_fn, result, out_idx, num_interp) and moves on to the next pair. Instead,
catch model_management.OOM_EXCEPTION, call model_management.soft_empty_cache(),
set multi_fn = None, and do NOT continue, so the loop falls through and retries
the same pair on the single-timestep path (i.e., let the subsequent single-step
interpolation logic run for the current i rather than skipping it).
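The fall-through behavior being requested can be illustrated with a small control-flow sketch. The function and parameter names are hypothetical, and MemoryError stands in for the CUDA OOM exception; the point is that disabling the fast path must not skip the rest of the current iteration.

```python
def process_pairs(pairs, multi_fn, single_fn):
    """Interpolate each frame pair, writing the pair's trailing frame after it.

    On an OOM from the multi-timestep path, disable it and fall through so
    the *same* pair is retried on the single-timestep path; a bare `continue`
    here would also skip appending the original frame `b`, losing it.
    """
    result = []
    for a, b in pairs:
        if multi_fn is not None:
            try:
                result.extend(multi_fn(a, b))
                result.append(b)        # original frame always written
                continue                # pair fully handled by fast path
            except MemoryError:
                multi_fn = None         # fall through: retry this pair below
        result.extend(single_fn(a, b))  # single-timestep fallback
        result.append(b)
    return result
```

With the erroneous `continue` inside the except block, the pair that triggered the OOM would contribute neither interpolated frames nor its original trailing frame.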
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e695b624-086d-4151-9510-1abfe96d97d3
📒 Files selected for processing (2)
comfy_extras/frame_interpolation_models/film_net.py
comfy_extras/nodes_frame_interpolation.py
♻️ Duplicate comments (1)
comfy_extras/nodes_frame_interpolation.py (1)
Lines 177-181: ⚠️ Potential issue | 🔴 Critical — OOM fallback skips writing the current pair's original frame.
When multi_fn raises OOM, the continue at line 181 skips the rest of the loop body, including lines 201-202 which write images[i + 1] to the result. This causes frame loss: the interpolated frames for pair i are not written (expected, since we want to retry), but the original frame images[i + 1] is also skipped (bug). The intended behavior is to fall through and process the current pair with single-timestep calls instead of skipping to the next pair.
Proposed fix

```diff
 if multi_fn is not None:
     # Models with timestep-independent flow can compute it once for all timesteps
     try:
         mids = multi_fn(img0_single, img1_single, t_values, cache=feat_cache)
         result[out_idx:out_idx + num_interp] = mids[:, :, :H, :W].to(out_dtype)
         out_idx += num_interp
         pbar.update(num_interp)
         tqdm_bar.update(num_interp)
     except model_management.OOM_EXCEPTION:
         # Fall back to single-timestep calls for this and subsequent pairs
         model_management.soft_empty_cache()
         multi_fn = None
-        continue
-else:
+        # Fall through to process current pair with single-timestep path
+
+if multi_fn is None:
     j = 0
     while j < num_interp:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@comfy_extras/nodes_frame_interpolation.py` around lines 177-181: the OOM exception handler currently sets multi_fn = None and then uses "continue", which skips the remainder of the loop and prevents writing the original frame (images[i + 1]). Remove the "continue" so execution falls through and the loop processes the current pair on the single-timestep path. Specifically, in the except model_management.OOM_EXCEPTION block (where model_management.soft_empty_cache() and multi_fn = None are set), delete the "continue" and ensure the subsequent logic checks multi_fn (now None), runs the single-timestep interpolation, and writes images[i + 1] to the result.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around line 177-181: The OOM exception handler currently sets multi_fn = None
then uses "continue", which skips the remainder of the loop and prevents writing
the original frame (images[i + 1]); remove the "continue" so execution falls
through and the loop will process the current pair using the single-timestep
path. Specifically, in the except model_management.OOM_EXCEPTION block (where
model_management.soft_empty_cache() and multi_fn = None are set), delete the
"continue" and ensure subsequent logic checks multi_fn (now None) and executes
the single-timestep interpolation and writes images[i + 1] to the result.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: e27f8a83-58a9-4a47-ab92-5dfb86d9f203
📒 Files selected for processing (2)
comfy_extras/nodes_frame_interpolation.py
models/frame_interpolation/put_frame_interpolation_models_here
Tested the code on a Windows machine, seems good. One change I'd request is to remove torch compile from the node. We can worry about that later in terms of compatibility; its removal will simplify the merge of this PR.
Who are you?
And why am I getting these notifications?
@kijai was about to merge, but realized something: do we have any other nodes in core that have an input named 'model' that isn't a MODEL type? If not, maybe that input can be called interp_model instead. Comparison with LATENT_UPSCALE_MODEL:
feat: RIFE and FILM frame interpolation model support (CORE-29) (Comfy-Org#13258)

* fix: pin SQLAlchemy>=2.0 in requirements.txt (fixes #13036) (#13316)
* Refactor io to IO in nodes_ace.py (#13485)
* Bump comfyui-frontend-package to 1.42.12 (#13489)
* Make the ltx audio vae more native. (#13486)
* feat(api-nodes): add automatic downscaling of videos for ByteDance 2 nodes (#13465)
* Support standalone LTXV audio VAEs (#13499)
* [Partner Nodes] added 4K resolution for Veo models; added Veo 3 Lite model (#13330); increase poll_interval from 5 to 9
* Bump comfyui-frontend-package to 1.42.14 (#13493)
* Add gpt-image-2 as version option (#13501)
* Allow logging in comfy app files. (#13505)
* chore: update workflow templates to v0.9.59 (#13507)
* fix(veo): reject 4K resolution for veo-3.0 models in Veo3VideoGenerationNode (#13504). The tooltip on the resolution input states that 4K is not available for veo-3.1-lite or veo-3.0 models, but the execute guard only rejected the lite combination. Selecting 4K with veo-3.0-generate-001 or veo-3.0-fast-generate-001 would fall through and hit the upstream API with an invalid request. Broaden the guard to match the documented behavior and update the error message accordingly.
* feat: RIFE and FILM frame interpolation model support (CORE-29) (#13258): initial RIFE support; also support FILM; better RAM usage, reduce FILM VRAM peak; add model folder placeholder; fix OOM fallback frame loss; remove torch.compile for now; rename model input; shorter input type name
* fix: use Parameter assignment for Stable_Zero123 cc_projection weights (fixes #13492) (#13518). On Windows with aimdo enabled, disable_weight_init.Linear uses lazy initialization that sets weight and bias to None to avoid unnecessary memory allocation. This caused a crash when copy_() was called on the None weight attribute in Stable_Zero123.__init__. Replace copy_() with direct torch.nn.Parameter assignment, which works correctly on both Windows (aimdo enabled) and other platforms.
* Derive InterruptProcessingException from BaseException (#13523)
* bump manager version to 4.2.1 (#13516)
* ModelPatcherDynamic: force cast stray weights on comfy layers (#13487). The mixed_precision ops can have input_scale parameters that are used in tensor math but aren't a weight or bias, so they don't get proper VRAM management. Treat these as force-castable parameters like the non-comfy weights; random params are buffers already.
* Update logging level for invalid version format (#13526)
* [Partner Nodes] add SD2 real human support (#13509): add validation before uploading Assets; add asset_id and group_id displaying on the node; extend poll_op to use instead of a custom async cycle; add polling for the "Active" status after asset creation; updated tooltip for group_id; allow usage of real human in the ByteDance2FirstLastFrame node; add reference count limits; corrected price in status when input assets contain video
* feat: SAM (segment anything) 3.1 support (CORE-34) (#13408)
* [Partner Nodes] GPTImage: fix price badges, add new resolutions (#13519): properly calculate the total run cost when "n > 1"
* chore: update workflow templates to v0.9.61 (#13533)
* chore: update embedded docs to v0.4.4 (#13535)
* add 4K resolution to Kling nodes (#13536)
* Fix LTXV Reference Audio node (#13531)
* comfy-aimdo 0.2.14: Hotfix async allocator estimations (#13534). This was over-estimating the VRAM used by the async allocator when lots of small tensors were in play. Also change the versioning scheme to == so we can roll forward aimdo without worrying about stable regressions downstream in ComfyUI core.
* Disable sageattention for SAM3 (#13529). Causes NaNs.
* execution: Add anti-cycle validation (#13169). Currently if the graph contains a cycle, validation just recurses infinitely, hits a catch-all, then throws a generic error against the output node that seeded the validation. Instead, fail the offending cyclic node chain and handle it as an error in its own right.
* chore: update workflow templates to v0.9.62 (#13539)

Co-authored-by: Jedrzej Kosinski, Octopus, comfyanonymous, Comfy Org PR Bot, Alexander Piskun, Jukka Seppänen, AustinMroz, Daxiong (Lin), Matt Miller, blepping, Dr.Lt.Data, rattus, guill
Adds PyTorch-only, optimized support for the RIFE (MIT) and FILM (Apache 2.0) video frame interpolation models:
https://huggingface.co/Comfy-Org/frame_interpolation/tree/main/frame_interpolation