
Add HunyuanVideo-1.5 contrib model #130

Open

jimburtoft wants to merge 4 commits into aws-neuron:main from jimburtoft:contrib/hunyuanvideo-1.5

Conversation

@jimburtoft (Contributor)

Note: The below template includes items meant for model contributions only. For other contributions such as bug fixes, features, etc., only fill out the relevant portions of the form.

Description

HunyuanVideo-1.5 text-to-video generation on AWS Trainium: an 8.33B-parameter DiT transformer with TP=4 parallelism and NKI flash attention, producing 480x848 video in ~55s end-to-end with Classifier-Free Guidance (CFG).

Multi-component pipeline: DiT backbone (Neuron TP=4), 3D Causal VAE (Neuron tiled decode), byT5 glyph encoder (Neuron traced), Qwen2.5-VL 7B LLM text encoder (CPU bf16).
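In outline, one generation chains these components as follows (a minimal sketch; the function names are illustrative, not the modules' actual APIs):

```python
# Hypothetical end-to-end flow for a single prompt (names are illustrative).
prompt_emb = qwen_encode_cpu(prompt)       # Qwen2.5-VL 7B text encoder, CPU bf16
glyph_emb = byt5_encode_neuron(prompt)     # traced byT5 glyph encoder
latents = init_noise((1, C, T, 480 // 16, 848 // 16))  # VAE upsamples 16x spatially
for sigma, sigma_next in scheduler.steps:  # flow-matching Euler loop
    latents = dit_step_tp4(latents, sigma, sigma_next, prompt_emb, glyph_emb)
frames = vae_tiled_decode_neuron(latents)  # 3D Causal VAE, tiled decode
```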

Model Information

Model Name: HunyuanVideo-1.5 (tencent/HunyuanVideo-1.5)

Model Architecture: 54 double-stream DiT blocks, hidden_size=2048, 16 attention heads, head_dim=128, flow matching with Euler scheduler (see the Euler-step sketch below)

Purpose: Text-to-video generation (480x848 resolution, 5 frames)
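For reference, the flow-matching Euler scheduler reduces to a first-order ODE update on the predicted velocity (a sketch; the exact sigma schedule is model-specific and assumed here):

```python
# One flow-matching Euler step: move latents along the predicted velocity.
# sigma / sigma_next come from the scheduler's (decreasing) noise schedule.
v = dit(latents, sigma, text_embeddings)      # velocity prediction
latents = latents + (sigma_next - sigma) * v  # first-order Euler update
```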

Checklist

Required Components

  • Accuracy Test (test/integration/test_model.py)

    • byT5 encoder + mapper: cosine similarity >= 0.999 vs CPU reference (measured: 1.000 / 0.9999)
    • VAE tile decode: finite output, correct shape (16x spatial upsampling)
    • E2E pipeline: produces 5 frames at 480x848 resolution
    • E2E performance: 50-step generation within 120s wall-clock
  • README.md with the following sections:

    • Usage Example: Full setup from scratch with environment variables
    • Compatibility Matrix: Validated on trn2.3xlarge with SDK 2.28
    • Example Checkpoints: Links to all 3 required HuggingFace models
    • Testing Instructions: Commands for component and E2E test groups
  • Source Code (src/)

    • 8 Python modules covering DiT TP wrapper, CPU preprocessor, VAE compilation, tiled VAE decode, byT5 tracing, negative embedding caching, E2E pipeline, and DiT compilation
    • Uses torch_neuronx.trace() + parallel_model_trace() for TP=4
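A minimal sketch of the two tracing paths (hedged: parallel_model_trace is assumed here to be the neuronx_distributed API, and exact signatures vary across SDK releases):

```python
import torch_neuronx
from neuronx_distributed.trace import parallel_model_trace  # recent Neuron SDKs

# Single-core components (byT5 encoder, VAE decoder) trace directly:
traced_byt5 = torch_neuronx.trace(byt5_encoder, byt5_example_inputs)

# The DiT is sharded over 4 logical NeuronCores. parallel_model_trace calls
# the factory once per rank so each rank builds its own TP shard.
def get_dit():
    model = build_tp_sharded_dit()  # hypothetical factory for the TP=4 DiT
    return model, {}                # (model, input/output aliases)

traced_dit = parallel_model_trace(get_dit, dit_example_inputs, tp_degree=4)
```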

Optional Components

  • Unit Tests (CPU or Neuron-based)

Folder Structure

Confirm your contribution follows this structure:

```
/contrib/models/HunyuanVideo-1.5/
  README.md
  /src
    __init__.py
    dit_tp_wrapper.py
    dit_wrapper.py
    e2e_pipeline.py
    compile_vae_neuron.py
    tiled_vae_decode.py
    trace_byt5.py
    cache_neg_embeddings.py
    recompile_dit_masked.py
  /test
    __init__.py
    /integration
      __init__.py
      test_model.py
  /samples
    /neuron
      frame_0000.png
      frame_0002.png
      frame_0004.png
  /examples
    generate_video.py
```

Testing

How did you test this change?

All tests executed on trn2.3xlarge (LNC=2, 4 logical NeuronCores) with pre-compiled models.

Component tests (byT5 + VAE) run in-process; E2E tests run as subprocess.
Tests must be run in two groups to avoid NeuronCore contention:

pytest test/integration/test_model.py -k "ByT5 or VAE" -v  # 2 passed, 54s
pytest test/integration/test_model.py -k "E2E" -v           # 2 passed, 150s

Test Results:

| Test | Result | Key Metric |
| --- | --- | --- |
| byT5 encoder accuracy | PASSED | cos_sim = 1.000001 |
| byT5 mapper accuracy | PASSED | cos_sim = 0.999947 |
| VAE tile decode | PASSED | [1,3,5,128,128], 177ms, all finite |
| E2E frame generation (2 steps) | PASSED | 5 frames at 480x848, 23.2s |
| E2E performance (50 steps) | PASSED | 327ms/step, 60.8s wall-clock |

Compatibility

Tested with:

  • Neuron SDK Version(s): 2.28 (neuronx-cc 2.22, torch-neuronx 2.9.0)
  • Instance Type(s): trn2.3xlarge (LNC=2)
  • PyTorch Version: 2.9.0
  • Python Version: 3.12.3

Additional Information

5 code patches required for Neuron compatibility:

  1. Dtype mismatch in token_refiner (mask.float() -> mask.to(x.dtype))
  2. FlexAttention replaced with F.scaled_dot_product_attention
  3. NaN from SDPA mask expansion (key-only mask fix)
  4. byT5 output dtype cast to bf16
  5. RoPE rotate_half: interleaved-pair rotation (critical for visual quality)
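Patch 5 is the classic half-split vs interleaved RoPE mismatch (GPT-NeoX-style vs GPT-J-style). A sketch of the two rotations, with the interleaved variant being what the checkpoint expects per the patch note (the cos/sin tables must use the matching layout):

```python
import torch

def rotate_half_split(x):
    # Half-split (GPT-NeoX-style): rotate across the two halves of head_dim.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def rotate_half_interleaved(x):
    # Interleaved pairs (GPT-J-style): rotate each adjacent (even, odd) pair.
    # This is the rotation the patch switches to.
    even, odd = x[..., 0::2], x[..., 1::2]
    return torch.stack((-odd, even), dim=-1).flatten(-2)

# q_rot = q * cos + rotate_half_interleaved(q) * sin   (likewise for k)
```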

Performance:

  • DiT per-step: 327ms (TP=4, NKI flash attention)
  • VAE tiled decode: 8.5s (45 tiles x 177ms; see the tiling sketch after this list)
  • byT5 encode: 4.4ms
  • E2E with CFG (50 steps): ~55s
  • E2E without CFG: ~25s
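The tiled decode time is consistent with the per-tile cost (45 tiles x 177ms ≈ 8.0s, plus stitching overhead). A minimal sketch of the tiling loop, with tile size, overlap, and the blending helper as illustrative assumptions:

```python
# Hypothetical tiled 3D-VAE decode: decode fixed-shape latent tiles on Neuron
# (one compiled graph) and feather-blend overlapping borders to hide seams.
def tiled_decode(vae_decode, latents, tile=16, overlap=4):
    _, _, _, H, W = latents.shape
    stride = tile - overlap
    canvas = None
    for y in range(0, H, stride):
        for x in range(0, W, stride):
            lat = latents[..., y:y + tile, x:x + tile]
            px = vae_decode(lat)  # 16x spatial upsampling per tile
            canvas = blend_into(canvas, px, y * 16, x * 16)  # hypothetical feathered blend
    return canvas
```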

Known limitations:

  • LLM (Qwen2.5-VL 7B) runs on CPU (~14s); the model is too large for a single-core Neuron trace
  • VAE requires monkey-patches to work around a replication_pad3d compiler issue
  • B=2 batched CFG is 4.2x slower than sequential CFG (HBM bandwidth saturation); see the sketch below
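The sequential path is just standard classifier-free guidance run as two B=1 DiT passes per step instead of one B=2 pass (a sketch; argument names are illustrative):

```python
# Two single-batch passes per step; faster here than one B=2 pass because
# the batched pass saturates HBM bandwidth.
v_uncond = dit(latents, sigma, neg_embeddings)   # cached negative embeddings
v_cond = dit(latents, sigma, text_embeddings)
v = v_uncond + guidance_scale * (v_cond - v_uncond)
```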

Related Issues

N/A

vLLM Integration

  • This model/feature is intended for use with vLLM
  • Documentation includes vLLM registration instructions

By submitting this PR, I confirm that:

  • I have read and followed the contributing guidelines
  • This is a community contribution and may have limited testing compared to officially-supported models
  • The code follows best practices and is well-documented
  • All required components listed above are included

8.33B DiT text-to-video pipeline on trn2.3xlarge (TP=4, NKI flash attention).
55s E2E with CFG, 25s without. Photorealistic 480x848 output.

Components: DiT (328ms/step), VAE (tiled, 8.5s), byT5 (4.4ms), LLM (CPU).
5 code patches for Neuron compatibility documented in README.

- byT5 test: use actual HunyuanVideo loading code instead of nonexistent
  load_byt5_models(), fix mapper to single bf16 arg, add torch_neuronx import
- VAE test: use TiledVAEDecoderNeuron._decode_tile() with correct tile shape,
  add spatial dimension validation (16x upsampling)
- E2E tests: use subprocess to run e2e_pipeline.py (argparse-based, not importable)
- Document NeuronCore contention: component and E2E tests must run separately
- Update performance threshold to 120s wall-clock (includes model loading overhead)

Tested on trn2.3xlarge SDK 2.28:
  byT5 encoder cos_sim=1.000001, mapper cos_sim=0.999947
  VAE tile decode: [1,3,5,128,128], 177ms, all finite
  E2E 2-step: 5 frames at 480x848, 23.2s
  E2E 50-step: 327ms/step avg, 60.8s wall-clock

Include nki_rope.py with a contiguous-layout NKI kernel for fused
RoPE rotation. Benchmarked at ~3% speedup per DiT step (250ms vs
257ms). Disabled by default due to modest gain; enabled via env var
for users who want to experiment with NKI custom kernels.

STA (sliding tile attention) enables block-sparse attention for sequences
too long for dense O(n^2) attention (e.g., 129-frame 480p at ~52K tokens).
Tokens are tiled in 3D (T,H,W) and each tile attends only to its
spatio-temporal neighborhood using the attention_cte NKI kernel.

Key design decisions:
- Auto-enabled via sta_config parameter (no env var needed)
- Scatter-free architecture: boundary clamping ensures uniform neighborhood
  sizes, eliminating the global scatter that caused SBUF overflow at 64K+
  tokens (see the index sketch after this list)
- Per-chunk KV gather to avoid materializing full gather tensors
- Same model weights for both dense and STA modes (no learned parameters)
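A sketch of the clamped, uniform-size neighborhood indexing described above (window radius and flattening order are illustrative; the actual attention runs through the attention_cte NKI kernel):

```python
import torch

def clamp(v, hi):
    return min(max(v, 0), hi)

def clamped_neighbor_indices(nT, nH, nW, r=1):
    # For each (t, h, w) query tile, list the KV tiles in its 3D neighborhood.
    # Clamping at volume boundaries keeps every neighborhood the same size,
    # so the KV gather is a fixed-shape index lookup -- no global scatter.
    rows = []
    for t in range(nT):
        for h in range(nH):
            for w in range(nW):
                neigh = []
                for dt in range(-r, r + 1):
                    for dh in range(-r, r + 1):
                        for dw in range(-r, r + 1):
                            tt = clamp(t + dt, nT - 1)
                            hh = clamp(h + dh, nH - 1)
                            ww = clamp(w + dw, nW - 1)
                            neigh.append((tt * nH + hh) * nW + ww)
                rows.append(neigh)
    # Shape [nT*nH*nW, (2r+1)**3]; border tiles repeat their clamped neighbors.
    return torch.tensor(rows)
```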

Files added:
- src/sta_attention.py: STAAttention module with pre-computed indices

Files modified:
- src/dit_tp_wrapper.py: Added sta_attention param to TPMMDoubleStreamBlock,
  sta_config param to HunyuanDiTTPWrapper, STA/dense dispatch in forward()
- README.md: STA documentation, benchmarks, compilation instructions