
feat: RIFE and FILM frame interpolation model support (CORE-29) #13258

Merged
Kosinkadink merged 14 commits into Comfy-Org:master from kijai:rife on Apr 22, 2026

Conversation

@kijai (Contributor) commented Apr 2, 2026

Adds PyTorch-only, optimized support for the RIFE (MIT) and FILM (Apache 2.0) video frame interpolation models:

https://huggingface.co/Comfy-Org/frame_interpolation/tree/main/frame_interpolation

image

@coderabbitai (Bot) commented Apr 2, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Adds frame interpolation support via a new extension module comfy_extras/nodes_frame_interpolation.py that registers two nodes (FrameInterpolationModelLoader, FrameInterpolate) and an entrypoint. Introduces two model implementations (FILMNet in comfy_extras/frame_interpolation_models/film_net.py, IFNet in comfy_extras/frame_interpolation_models/ifnet.py) and a FrameInterpolationModel I/O type. Model loader searches models/frame_interpolation, detects FILM vs IFNet weights, normalizes state-dict keys, instantiates and loads the appropriate model, and returns a ModelPatcher. The interpolation node handles padding, optional torch.compile, per-pair multi-timestep inference (with fallback), dtype/device selection, CUDA OOM retry logic, and returns the interpolated sequence. folder_paths.py and builtin extra node imports are updated.
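The loader flow described in the walkthrough (search the models folder, detect FILM vs IFNet weights, normalize state-dict keys, instantiate the matching model) could be sketched roughly as below. The `FILMNet`/`IFNet` classes here are empty stand-ins for the real implementations, and the `"module."` prefix and `"feature_extractor"` marker key are hypothetical; the actual detection logic lives in `comfy_extras/nodes_frame_interpolation.py`.

```python
import torch

class FILMNet(torch.nn.Module):  # stand-in for frame_interpolation_models/film_net.py
    pass

class IFNet(torch.nn.Module):  # stand-in for frame_interpolation_models/ifnet.py
    pass

def load_interpolation_model(sd: dict) -> torch.nn.Module:
    """Normalize a (hypothetical) wrapper prefix on the state-dict keys,
    pick the architecture from a (hypothetical) marker key, then load."""
    sd = {k.removeprefix("module."): v for k, v in sd.items()}
    is_film = any("feature_extractor" in k for k in sd)
    model = FILMNet() if is_film else IFNet()
    # strict=False because the stand-in classes define no parameters
    model.load_state_dict(sd, strict=False)
    return model
```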

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 8.20%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (2 passed)

  • Title check ✅ Passed: the title clearly summarizes the main change (adding RIFE and FILM frame interpolation model support), which matches the primary purpose of the pull request.
  • Description check ✅ Passed: the PR description directly relates to the changeset, describing the addition of RIFE and FILM frame interpolation model support with PyTorch optimization.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around line 107-110: The padding call on frames using F.pad with
mode="reflect" can fail when pad_h >= H or pad_w >= W (e.g., 16×64 or 32×32);
update the logic around pad_h/pad_w calculation so that before calling F.pad you
choose mode = "reflect" if pad_h < H and pad_w < W, otherwise use mode =
"replicate" (or another non-reflect fallback) and pass that mode into
F.pad(frames, (0, pad_w, 0, pad_h), mode=mode) so the function (and the
variables pad_h, pad_w, H, W and the F.pad call) handles small-frame cases
safely.
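The suggested guard can be sketched as follows. `pad_frames` and the alignment value of 32 are illustrative stand-ins, not the PR's actual code; the point is only that `mode="reflect"` must be swapped for `"replicate"` when a pad would be as large as the dimension it pads.

```python
import torch
import torch.nn.functional as F

def pad_frames(frames: torch.Tensor, align: int = 32) -> torch.Tensor:
    """Pad H/W up to a multiple of `align`, falling back to replicate
    padding when reflect would be invalid (pad size >= dim size)."""
    H, W = frames.shape[-2:]
    pad_h = (align - H % align) % align
    pad_w = (align - W % align) % align
    # F.pad(mode="reflect") requires each pad to be strictly smaller than
    # the input size in that dimension, which fails for small frames
    # (e.g. a 16x64 input padded by 16 rows)
    mode = "reflect" if pad_h < H and pad_w < W else "replicate"
    return F.pad(frames, (0, pad_w, 0, pad_h), mode=mode)
```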
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: b9519d0a-33fd-4ed3-8b9b-b8a2b44211ad

📥 Commits

Reviewing files that changed from the base of the PR and between 0c63b4f and a859152.

📒 Files selected for processing (4)
  • comfy_extras/nodes_frame_interpolation.py
  • comfy_extras/rife_model/ifnet.py
  • folder_paths.py
  • nodes.py

@kijai kijai changed the title feat: RIFE frame interpolation model support feat: RIFE frame interpolation model support (CORE-29) Apr 2, 2026
@kijai kijai changed the title feat: RIFE frame interpolation model support (CORE-29) feat: RIFE and FILM frame interpolation model support (CORE-29) Apr 4, 2026
coderabbitai Bot left a comment

Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around line 161-167: The multi_fn branch (where multi_fn is used to compute
mids in forward_multi_timestep()) currently asks for all timestep outputs at
once and bypasses the OOM-aware batch-halving fallback; update this branch to
catch CUDA OOM (or memory errors) from multi_fn/forward_multi_timestep and retry
by splitting t_values into smaller chunks (e.g., half the size repeatedly) and
calling multi_fn/forward_multi_timestep per-chunk, copying each chunk into
result as done now, using the same dtype/non_blocking logic and pbar/tqdm
updates; ensure you clear caches between retries (feat_cache or
torch.cuda.empty_cache()) and propagate the original error if even single-step
chunks fail.
- Around line 151-189: The inference loop currently calls inference_model and
inference_model.extract_features while only setting eval(), which still builds
autograd graphs and retains tensors in feat_cache; wrap the loop that iterates
over frame pairs (the try: for i in range(total_pairs): ... finally:) in
torch.inference_mode() so all calls to extract_features, inference_model(...)
and multi_fn(...) run without tracking gradients, preventing unnecessary graph
retention and reducing memory/OOM issues; update any context usage around
feat_cache, multi_fn, and inference_model to execute inside the inference_mode
block.
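Both suggestions can be combined in a sketch like the following. `run_chunked` and its signature are illustrative, not the PR's code: the chunk of timesteps halves on each CUDA OOM until single-timestep chunks either succeed or the error is propagated, and the whole loop runs under `torch.inference_mode()` so no autograd graphs are retained.

```python
import torch

def run_chunked(multi_fn, img0, img1, t_values):
    """Multi-timestep inference with an OOM-aware fallback: halve the
    timestep chunk on CUDA OOM, re-raising once chunks of one fail."""
    chunk = len(t_values)
    with torch.inference_mode():  # no gradient tracking during inference
        while True:
            try:
                outs = [multi_fn(img0, img1, t_values[i:i + chunk])
                        for i in range(0, len(t_values), chunk)]
                return torch.cat(outs)
            except torch.cuda.OutOfMemoryError:
                if chunk == 1:
                    raise  # even single-timestep chunks do not fit
                torch.cuda.empty_cache()  # clear cache before retrying
                chunk = max(1, chunk // 2)
```

Note this sketch recomputes already-finished chunks if an OOM occurs partway through a pass; a production version would keep partial results, as the review suggests with per-chunk copies into `result`.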
ℹ️ Review info

Run ID: 43f05f12-4f18-425e-bfd3-3376c4458e9f

📥 Commits

Reviewing files that changed from the base of the PR and between a859152 and 257c531.

📒 Files selected for processing (3)
  • comfy_extras/frame_interpolation_models/film_net.py
  • comfy_extras/frame_interpolation_models/ifnet.py
  • comfy_extras/nodes_frame_interpolation.py

coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@comfy_extras/nodes_frame_interpolation.py`:
- Around line 168-180: The OOM fallback in the multi_fn branch incorrectly uses
"continue", which skips writing outputs for the current pair (variables:
multi_fn, result, out_idx, num_interp) and moves to the next pair; instead,
catch model_management.OOM_EXCEPTION, call model_management.soft_empty_cache(),
set multi_fn = None, and do NOT continue so the loop falls through and retries
the same pair using the single-timestep path (i.e., let the subsequent
single-step interpolation logic run for the current i rather than skipping it).
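The fall-through pattern can be illustrated with a stripped-down loop. `process_pairs`, `multi_fn`, and `single_fn` are stand-ins, not the PR's actual code, and plain `MemoryError` stands in for `model_management.OOM_EXCEPTION`.

```python
def process_pairs(pairs, multi_fn, single_fn):
    """On OOM in the multi-timestep path, disable it and fall through so
    the *same* pair is retried on the single-timestep path; the original
    second frame of each pair is always written."""
    result = []
    for a, b in pairs:
        if multi_fn is not None:
            try:
                result.extend(multi_fn(a, b))
            except MemoryError:
                multi_fn = None  # no `continue`: fall through below
        if multi_fn is None:
            result.extend(single_fn(a, b))
        result.append(b)  # original frame written for every pair
    return result
```

Using `continue` in the except block instead would skip both the single-timestep retry and the `result.append(b)` write for the current pair, which is exactly the frame-loss bug the review flags.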
ℹ️ Review info

Run ID: e695b624-086d-4151-9510-1abfe96d97d3

📥 Commits

Reviewing files that changed from the base of the PR and between 257c531 and 3cbd1d5.

📒 Files selected for processing (2)
  • comfy_extras/frame_interpolation_models/film_net.py
  • comfy_extras/nodes_frame_interpolation.py

coderabbitai Bot left a comment

♻️ Duplicate comments (1)
comfy_extras/nodes_frame_interpolation.py (1)

177-181: ⚠️ Potential issue | 🔴 Critical

OOM fallback skips writing the current pair's original frame.

When multi_fn raises OOM, the continue at line 181 skips the rest of the loop body including line 201-202 which writes images[i + 1] to the result. This causes frame loss—the interpolated frames for pair i are not written (expected, we want to retry), but the original frame images[i + 1] is also skipped (bug).

The intended behavior should fall through to process the current pair with single-timestep calls instead of skipping to the next pair.

Proposed fix
             if multi_fn is not None:
                 # Models with timestep-independent flow can compute it once for all timesteps
                 try:
                     mids = multi_fn(img0_single, img1_single, t_values, cache=feat_cache)
                     result[out_idx:out_idx + num_interp] = mids[:, :, :H, :W].to(out_dtype)
                     out_idx += num_interp
                     pbar.update(num_interp)
                     tqdm_bar.update(num_interp)
                 except model_management.OOM_EXCEPTION:
                     # Fall back to single-timestep calls for this and subsequent pairs
                     model_management.soft_empty_cache()
                     multi_fn = None
-                    continue
-            else:
+                    # Fall through to process current pair with single-timestep path
+
+            if multi_fn is None:
                 j = 0
                 while j < num_interp:
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@comfy_extras/nodes_frame_interpolation.py` around lines 177 - 181, The OOM
exception handler currently sets multi_fn = None then uses "continue", which
skips the remainder of the loop and prevents writing the original frame
(images[i + 1]); remove the "continue" so execution falls through and the loop
will process the current pair using the single-timestep path. Specifically, in
the except model_management.OOM_EXCEPTION block (where
model_management.soft_empty_cache() and multi_fn = None are set), delete the
"continue" and ensure subsequent logic checks multi_fn (now None) and executes
the single-timestep interpolation and writes images[i + 1] to the result.

ℹ️ Review info

Run ID: e27f8a83-58a9-4a47-ab92-5dfb86d9f203

📥 Commits

Reviewing files that changed from the base of the PR and between 3cbd1d5 and 36a9a60.

📒 Files selected for processing (2)
  • comfy_extras/nodes_frame_interpolation.py
  • models/frame_interpolation/put_frame_interpolation_models_here

@Kosinkadink (Member) commented:

Tested the code on a windows machine, seems good.

One change I'd request is to remove torch.compile from the node. We can worry about compatibility later; its removal will simplify merging this PR.

@GogitaTS commented Apr 13, 2026 via email

@GogitaTS commented Apr 13, 2026 via email

rattus128 previously approved these changes Apr 22, 2026
Kosinkadink previously approved these changes Apr 22, 2026
@Kosinkadink Kosinkadink self-requested a review April 22, 2026 05:13
@Kosinkadink (Member) commented:

@kijai was about to merge, but realized something: do we have any other nodes in core with an input named 'model' that isn't a MODEL type? If not, maybe that input can be called interp_model instead. Comparison with LATENT_UPSCALE_MODEL:
image

@kijai kijai dismissed stale reviews from Kosinkadink and rattus128 via 764e835 April 22, 2026 09:21
@Kosinkadink Kosinkadink merged commit db85cf0 into Comfy-Org:master Apr 22, 2026
14 checks passed
GuangWei-create added a commit to GuangWei-create/ComfyUI that referenced this pull request Apr 22, 2026
feat: RIFE and FILM frame interpolation model support (CORE-29) (Comfy-Org#13258)
Kosinkadink added a commit that referenced this pull request Apr 24, 2026
* fix: pin SQLAlchemy>=2.0 in requirements.txt (fixes #13036) (#13316)

* Refactor io to IO in nodes_ace.py (#13485)

* Bump comfyui-frontend-package to 1.42.12 (#13489)

* Make the ltx audio vae more native. (#13486)

* feat(api-nodes): add automatic downscaling of videos for ByteDance 2 nodes (#13465)

* Support standalone LTXV audio VAEs (#13499)

* [Partner Nodes]  added 4K resolution for Veo models; added Veo 3 Lite model (#13330)

* feat(api nodes): added 4K resolution for Veo models; added Veo 3 Lite model

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* increase poll_interval from 5 to 9

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>

* Bump comfyui-frontend-package to 1.42.14 (#13493)

* Add gpt-image-2 as version option (#13501)

* Allow logging in comfy app files. (#13505)

* chore: update workflow templates to v0.9.59 (#13507)

* fix(veo): reject 4K resolution for veo-3.0 models in Veo3VideoGenerationNode (#13504)

The tooltip on the resolution input states that 4K is not available for
veo-3.1-lite or veo-3.0 models, but the execute guard only rejected the
lite combination. Selecting 4K with veo-3.0-generate-001 or
veo-3.0-fast-generate-001 would fall through and hit the upstream API
with an invalid request.

Broaden the guard to match the documented behavior and update the error
message accordingly.

Co-authored-by: Jedrzej Kosinski <kosinkadink1@gmail.com>

* feat: RIFE and FILM frame interpolation model support (CORE-29) (#13258)

* initial RIFE support

* Also support FILM

* Better RAM usage, reduce FILM VRAM peak

* Add model folder placeholder

* Fix oom fallback frame loss

* Remove torch.compile for now

* Rename model input

* Shorter input type name

---------

* fix: use Parameter assignment for Stable_Zero123 cc_projection weights (fixes #13492) (#13518)

On Windows with aimdo enabled, disable_weight_init.Linear uses lazy
initialization that sets weight and bias to None to avoid unnecessary
memory allocation. This caused a crash when copy_() was called on the
None weight attribute in Stable_Zero123.__init__.

Replace copy_() with direct torch.nn.Parameter assignment, which works
correctly on both Windows (aimdo enabled) and other platforms.

* Derive InterruptProcessingException from BaseException (#13523)

* bump manager version to 4.2.1 (#13516)

* ModelPatcherDynamic: force cast stray weights on comfy layers (#13487)

the mixed_precision ops can have input_scale parameters that are used
in tensor math but arent a weight or bias so dont get proper VRAM
management. Treat these as force-castable parameters like the non comfy
weight, random params are buffers already are.

* Update logging level for invalid version format (#13526)

* [Partner Nodes] add SD2 real human support (#13509)

* feat(api-nodes): add SD2 real human support

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* fix: add validation before uploading Assets

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* Add asset_id and group_id displaying on the node

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* extend poll_op to use instead of custom async cycle

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* added the polling for the "Active" status after asset creation

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* updated tooltip for group_id

* allow usage of real human in the ByteDance2FirstLastFrame node

* add reference count limits

* corrected price in status when input assets contain video

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* feat: SAM (segment anything) 3.1 support (CORE-34) (#13408)

* [Partner Nodes] GPTImage: fix price badges, add new resolutions (#13519)

* fix(api-nodes): fixed price badges, add new resolutions

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* proper calculate the total run cost when "n > 1"

Signed-off-by: bigcat88 <bigcat88@icloud.com>

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* chore: update workflow templates to v0.9.61 (#13533)

* chore: update embedded docs to v0.4.4 (#13535)

* add 4K resolution to Kling nodes (#13536)

Signed-off-by: bigcat88 <bigcat88@icloud.com>

* Fix LTXV Reference Audio node (#13531)

* comfy-aimdo 0.2.14: Hotfix async allocator estimations (#13534)

This over-estimated the VRAM used by the async allocator when lots
of small tensors were in play.

Also change the versioning scheme to == so we can roll forward aimdo without
worrying about stable regressions downstream in comfyUI core.

* Disable sageattention for SAM3 (#13529)

Causes Nans

* execution: Add anti-cycle validation (#13169)

Currently, if the graph contains a cycle, validation just initiates infinite
recursion, hits a catch-all, then throws a generic error against the output node
that seeded the validation. Instead, fail the offending cyclic node
chain and handle it as an error in its own right.

Co-authored-by: guill <jacob.e.segal@gmail.com>

* chore: update workflow templates to v0.9.62 (#13539)

---------

Signed-off-by: bigcat88 <bigcat88@icloud.com>
Co-authored-by: Octopus <liyuan851277048@icloud.com>
Co-authored-by: comfyanonymous <121283862+comfyanonymous@users.noreply.github.com>
Co-authored-by: Comfy Org PR Bot <snomiao+comfy-pr@gmail.com>
Co-authored-by: Alexander Piskun <13381981+bigcat88@users.noreply.github.com>
Co-authored-by: Jukka Seppänen <40791699+kijai@users.noreply.github.com>
Co-authored-by: AustinMroz <austin@comfy.org>
Co-authored-by: Daxiong (Lin) <contact@comfyui-wiki.com>
Co-authored-by: Matt Miller <matt@miller-media.com>
Co-authored-by: blepping <157360029+blepping@users.noreply.github.com>
Co-authored-by: Dr.Lt.Data <128333288+ltdrdata@users.noreply.github.com>
Co-authored-by: rattus <46076784+rattus128@users.noreply.github.com>
Co-authored-by: guill <jacob.e.segal@gmail.com>