Add ml_depth_pro example #7832

Merged
Wumpf merged 3 commits into rerun-io:main from oxkitsune:gijs/depth-pro-example
Oct 22, 2024

Add ml_depth_pro example#7832
Wumpf merged 3 commits intorerun-io:mainfrom
oxkitsune:gijs/depth-pro-example

Conversation

@oxkitsune (Member) commented Oct 18, 2024

What

[Video: ml_depth_pro_example.mp4]

This adds an external example for visualizing DepthPro using the new video logging API.

Checklist

  • I have read and agree to the Contributor Guide and the Code of Conduct
  • I've included a screenshot or gif (if applicable)
  • The PR title and labels are set such as to maximize their usefulness for the next release's CHANGELOG

To run all checks from main, comment on the PR with @rerun-bot full-check.

@Wumpf Wumpf self-requested a review October 18, 2024 14:45
@Wumpf Wumpf added the examples (Issues relating to the Rerun examples) and include in changelog labels Oct 18, 2024
@Wumpf Wumpf added this to the Next patch release milestone Oct 18, 2024
@Wumpf (Member) left a comment

Nice! Description LGTM, though maybe a bit on the short side 🤷 Is there a link to the paper that could be included as well?

I tried it with this old classic:
[image]
For some reason it ended up only doing three frames, huh.

@Wumpf (Member) commented Oct 18, 2024

Looks like pixi run lint-rerun is unhappy. It probably needs some NOLINT comments (there's now also BEGIN_NOLINT and END_NOLINT).

@oxkitsune (Member, Author)
Sure, I will add some more details to the description.

As for the error, it might have logged a text log for it; I suspect it's HF limits.

I run a forward pass for each frame separately, which might eat up the ZeroGPU budget too fast.
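
For illustration only (this is not the PR's actual code), the per-frame vs. batched trade-off can be sketched with a stand-in model. Here `run_model` is a hypothetical placeholder for a depth model's forward pass; the point is that stacking frames into one array turns N calls (each with fixed per-call overhead, which burns a time-limited GPU quota) into a single call:

```python
import numpy as np

def run_model(batch: np.ndarray) -> np.ndarray:
    """Stand-in for a depth model's forward pass.

    Takes a batch of frames shaped (N, H, W, 3) and returns one
    depth map per frame, shaped (N, H, W). A real model invocation
    has fixed per-call overhead, so fewer, larger calls make better
    use of a time-limited GPU budget such as ZeroGPU.
    """
    return batch.astype(np.float32).mean(axis=-1)

frames = [
    np.random.randint(0, 255, (4, 4, 3), dtype=np.uint8) for _ in range(10)
]

# Per-frame: ten separate "forward passes", one per frame.
per_frame = [run_model(f[None])[0] for f in frames]

# Batched: stack frames into a single (N, H, W, 3) array, one call.
batched = run_model(np.stack(frames))  # batched.shape == (10, 4, 4)
```

The results are identical either way; only the number of model invocations changes.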

@oxkitsune (Member, Author)
Getting this for each frame while running it:
[image] 😢

will look into batching it

@oxkitsune (Member, Author)
Lints pass, and batched inference works when I tested it locally on a 4090. I'm out of ZeroGPU quota, so I can't test it on HF for now.

@Wumpf Wumpf merged commit ef13c80 into rerun-io:main Oct 22, 2024
@Wumpf Wumpf changed the title Add ml_depth_pro example Add ml_depth_pro example Nov 11, 2024
@Wumpf Wumpf removed this from the Next patch release milestone Feb 6, 2025

Labels

examples (Issues relating to the Rerun examples), include in changelog


2 participants