
Huge Activation Values in EfficientViT-SAM-L2 Checkpoint #184

@bobqianic

Description


[Images: per-layer activation magnitude plots for the L0, L1, and L2 models]

I downloaded the checkpoint from Hugging Face.
The L2 model is extremely unstable when running inference under FP16 AMP: it outputs NaNs. This does not happen with L0 or L1. I inspected the activations and found extremely large values.

I also inspected XL0 and XL1. They are even worse: extremely large activation values appear throughout the model, not just in LiteMLA.
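
For anyone who wants to reproduce the inspection, here is a minimal sketch using plain PyTorch forward hooks to record the max |activation| of every module during one FP16 forward pass. The model-loading step is omitted, and `max_activation_stats` is an illustrative helper, not code from this repo:

```python
import torch

def max_activation_stats(model: torch.nn.Module, image: torch.Tensor) -> dict[str, float]:
    """Record the max |activation| of every module during one FP16 forward pass."""
    stats: dict[str, float] = {}
    handles = []

    def make_hook(name):
        def hook(module, inputs, output):
            # Only track plain tensor outputs; skip tuples/dicts for simplicity.
            if isinstance(output, torch.Tensor):
                stats[name] = max(stats.get(name, 0.0),
                                  output.detach().float().abs().max().item())
        return hook

    for name, module in model.named_modules():
        handles.append(module.register_forward_hook(make_hook(name)))

    with torch.autocast("cuda", dtype=torch.float16), torch.no_grad():
        model(image)

    for h in handles:
        h.remove()
    return stats

# Usage (model loading omitted; build the L2 image encoder from the repo's model zoo):
# model = ...  # EfficientViT-SAM-L2 image encoder, on CUDA, in eval mode
# stats = max_activation_stats(model, torch.rand(1, 3, 512, 512, device="cuda"))
# for name, v in sorted(stats.items(), key=lambda kv: -kv[1])[:10]:
#     print(f"{name}: {v:.1f}")  # values near 65504 (FP16 max) overflow to inf/NaN
```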

Input image
image_8.tif
