[diffusion] feat: support saving videos directly on the server to avoid the overhead of tensor transfer (#18253)
Conversation
Summary of Changes

Hello @nono-Sang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request implements a significant optimization for multimodal generation by allowing video and audio outputs to be saved directly on the server. It introduces a mechanism to return only the file paths to the client, eliminating the need to transfer potentially large tensor data across the network. This enhances the system's efficiency and responsiveness, particularly for applications involving high-volume video generation.
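For orientation, the new switch looks roughly like the sketch below. Only the field names `save_output` and `return_file_paths_only` come from the PR's diffs; the dataclass shape and defaults are illustrative assumptions, not the actual sampling-params file:

```python
# Illustrative sketch only: field names mirror the PR's diffs; the
# dataclass layout and defaults are assumed, not copied from the repo.
from dataclasses import dataclass

@dataclass
class SamplingParamsSketch:
    save_output: bool = False
    # When both flags are set, the GPU worker saves the generated video
    # server-side and the reply carries only the file path, so no output
    # tensor has to be serialized over ZMQ.
    return_file_paths_only: bool = False
```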
Activity
Code Review
This pull request introduces a valuable optimization by allowing videos to be saved directly on the server, which avoids the performance overhead of transferring large tensors. The changes span several files, including the sampling parameters, the diffusion generator, and the GPU worker, and appear to correctly implement the new `return_file_paths_only` feature. My review includes two suggestions to address code duplication, which would improve the overall maintainability of the code.
```python
if req.save_output and req.return_file_paths_only:
    for output_idx, output_path in enumerate(
        output_batch.output_file_paths
    ):
        result_item: dict[str, Any] = {
            "samples": None,
            "frames": None,
            "audio": None,
            "prompts": req.prompt,
            "size": (req.height, req.width, req.num_frames),
            "generation_time": timer.duration,
            "peak_memory_mb": output_batch.peak_memory_mb,
            "timings": (
                output_batch.timings.to_dict()
                if output_batch.timings
                else {}
            ),
            "trajectory": output_batch.trajectory_latents,
            "trajectory_timesteps": output_batch.trajectory_timesteps,
            "trajectory_decoded": output_batch.trajectory_decoded,
            "prompt_index": output_idx,
            "output_file_path": output_path,
        }
        results.append(result_item)
    continue
```
There is significant duplication in how `result_item` is constructed in this new block and in the existing code that handles the case where `return_file_paths_only` is false (located at lines 301-318 in the full file). Many fields are identical across both dictionary creations.
To make the code more maintainable and adhere to the DRY (Don't Repeat Yourself) principle, I recommend refactoring this. You could define a base dictionary containing all the common fields before this if statement, and then use dictionary unpacking to add the specific fields for each case.
For example:
```python
# Create a base dictionary with common fields
base_result_item = {
    "prompts": req.prompt,
    "size": (req.height, req.width, req.num_frames),
    "generation_time": timer.duration,
    # ... other common fields
}

if req.save_output and req.return_file_paths_only:
    for output_idx, output_path in enumerate(output_batch.output_file_paths):
        result_item = {
            **base_result_item,
            "samples": None,
            "frames": None,
            "audio": None,
            "prompt_index": output_idx,
            "output_file_path": output_path,
        }
        results.append(result_item)
    continue

# Later in the function for the other case:
for output_idx, sample in enumerate(output_batch.output):
    # ...
    result_item = {
        **base_result_item,
        "samples": sample,
        "frames": frames,
        "audio": audio,
        "prompt_index": output_idx,
    }
    results.append(result_item)
```

A second review thread covers the new saving path in the GPU worker:

```python
# Save output to file and return file path only if requested. Avoid the serialization
# and deserialization overhead between scheduler_client and GPU worker.
if req.save_output and req.return_file_paths_only:
    output_paths: list[str] = []
    # Copied from DiffGenerator.generate()
    audio_sample_rate = output_batch.audio_sample_rate
    for output_idx, sample in enumerate(output_batch.output):
        num_outputs = len(output_batch.output)
        audio = output_batch.audio
        if req.data_type == DataType.VIDEO:
            if isinstance(audio, torch.Tensor) and audio.ndim >= 2:
                audio = (
                    audio[output_idx]
                    if audio.shape[0] > output_idx
                    else None
                )
            elif isinstance(audio, np.ndarray) and audio.ndim >= 2:
                audio = (
                    audio[output_idx]
                    if audio.shape[0] > output_idx
                    else None
                )
            if audio is not None and not (
                isinstance(sample, (tuple, list)) and len(sample) == 2
            ):
                sample = (sample, audio)
        save_file_path = req.output_file_path(num_outputs, output_idx)
        post_process_sample(
            sample,
            fps=req.fps,
            save_output=True,
            save_file_path=save_file_path,
            data_type=req.data_type,
            audio_sample_rate=audio_sample_rate,
        )
        output_paths.append(save_file_path)
    output_batch.output_file_paths = output_paths
    output_batch.output = None
```
The logic for preparing video samples by pairing them with audio (lines 199-216) is duplicated from other parts of the codebase, as acknowledged by the comment `# Copied from DiffGenerator.generate()`. The same logic can also be found in `diffusion_generator.py` and `runtime/entrypoints/openai/utils.py`.

To improve code maintainability and avoid future inconsistencies, this duplicated logic should be extracted into a single utility function. A good location for this function would be `python/sglang/multimodal_gen/runtime/entrypoints/utils.py`, alongside `post_process_sample`.
For example, you could create a helper function:
```python
def pair_video_sample_with_audio(sample, audio_batch, output_idx):
    """Pairs a video sample with its corresponding audio from a batch if available."""
    audio_for_sample = audio_batch
    if isinstance(audio_batch, (torch.Tensor, np.ndarray)) and audio_batch.ndim >= 2:
        audio_for_sample = audio_batch[output_idx] if audio_batch.shape[0] > output_idx else None
    if audio_for_sample is not None and not (isinstance(sample, (tuple, list)) and len(sample) == 2):
        return (sample, audio_for_sample)
    return sample
```

Using this helper would simplify the code here and in the other locations where this logic is repeated.
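To illustrate the helper's behavior, here is a small self-contained check; the helper is repeated verbatim for runnability, and the tensor shapes are arbitrary test values, not values from the PR:

```python
import numpy as np
import torch

def pair_video_sample_with_audio(sample, audio_batch, output_idx):
    """Pairs a video sample with its corresponding audio from a batch if available."""
    audio_for_sample = audio_batch
    if isinstance(audio_batch, (torch.Tensor, np.ndarray)) and audio_batch.ndim >= 2:
        audio_for_sample = audio_batch[output_idx] if audio_batch.shape[0] > output_idx else None
    if audio_for_sample is not None and not (isinstance(sample, (tuple, list)) and len(sample) == 2):
        return (sample, audio_for_sample)
    return sample

# Two video samples, but an audio batch holding only one track.
video = torch.zeros(8, 64, 64, 3)
audio_batch = torch.zeros(1, 16000)

paired = pair_video_sample_with_audio(video, audio_batch, 0)
assert isinstance(paired, tuple) and len(paired) == 2  # (video, audio)

unpaired = pair_video_sample_with_audio(video, audio_batch, 1)
assert isinstance(unpaired, torch.Tensor)  # no audio at index 1, sample returned as-is
```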
/tag-and-rerun-ci
www/pr/ks: (265 commits)

- [BugFix][PD] Fix metadata_buffer_index leak when aborted in PD (sgl-project#17483)
- Refactoring Mooncake TE as a shared distributed component (sgl-project#17810)
- [ModelOPT] Support Qwen 3 Next Coder NVFP4 (sgl-project#18224)
- Update author information in pyproject.toml (sgl-project#18453)
- [Kimi-K2.5] Fix missing `quant_config` in `KimiK25` (sgl-project#18440)
- Add tensor parallelism support to LFM2 ShortConv layers (sgl-project#17777)
- [diffusion] chore: revise process title (sgl-project#18446)
- Fix TRT-LLM MLA backend applying k_scale to BF16 KV cache in BMM1 (sgl-project#18396)
- [diffusion] refactor: group component loaders under the component_loaders/ directory (sgl-project#18438)
- [ModelOpt] Fix broken Qwen3-235B-A22B-Instruct-2507-NVFP4 launch (sgl-project#18189)
- [diffusion] feat: support efficient sequence shard (sgl-project#18161)
- [CI] fix: notebook ci may not working (sgl-project#18417)
- fix: sync server_args.kv_cache_dtype when detecting FP8 KV cache (sgl-project#18394)
- [Fix] Fix backend selection after flashinfer version update (sgl-project#18364)
- [diffusion] platform: support WAN/FLUX/Qwen-Image/Qwen-Image-edit on Ascend (sgl-project#13662)
- fix: fix NVFP4 Kimi-K2.5 weight mapping and exclude list (sgl-project#18370)
- [diffusion] feat: support saving videos directly on the server to avoid the overhead of tensor transfer (sgl-project#18253)
- [diffusion] fix: respect dist_timeout option (sgl-project#18386)
- [Doc] Fix outdated `--fp4-gemm-backend` documentation (sgl-project#18350)
- [diffusion] fix: remove unnecessary norm_type argument from GLM-Image dits (sgl-project#18382)
- ...
Motivation

The `scheduler_client` connects with the `gpu_worker` via ZMQ. The workflow is as follows:

- `sglang generate`: `diffusion_generate` --(`sync_scheduler_client.forward`)--> `gpu_worker`
- `sglang serve`: `http_server` --(`async_scheduler_client.forward`)--> `gpu_worker`

Old method: the `scheduler_client` and `gpu_worker` exchanged output tensors, which introduced serialization and deserialization overhead.

New method: the `gpu_worker` directly processes and saves the output tensor as video, then returns the file path to the `scheduler_client`.
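To make the overhead concrete, here is a back-of-the-envelope sketch; the tensor shape is an assumed example, and the pickle round-trip merely stands in for whatever serializer the ZMQ layer actually uses:

```python
# Rough illustration of why returning a path beats returning the tensor.
# Shape is an assumption (81-frame 720p RGB video), not a measurement.
import pickle

import torch

frames = torch.zeros(81, 720, 1280, 3, dtype=torch.uint8)

# Old method: the decoded tensor crosses the ZMQ socket.
tensor_payload = pickle.dumps(frames.numpy())
# New method: only the server-side file path crosses the socket.
path_payload = pickle.dumps("/outputs/video_0.mp4")

print(f"tensor payload: {len(tensor_payload) / 1e6:.0f} MB")  # ~224 MB here
print(f"path payload:   {len(path_payload)} bytes")           # a few dozen bytes
```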
Main code changes:

- Extracted the common saving logic into a shared function in `multimodal_gen/runtime/entrypoints/utils.py`. The saving logic in the following sections now uses this common function:
  - `multimodal_gen/runtime/entrypoints/openai/utils.py`
  - `multimodal_gen/runtime/managers/gpu_worker.py`
  - `multimodal_gen/runtime/entrypoints/http_server.py`
  - `multimodal_gen/runtime/entrypoints/diffusion_generator.py`
- The `gpu_worker` saves the output on the server and returns only file paths when `return_file_paths_only` is set to `True`.
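On the consumer side, callers can branch on which field a result item carries. The keys below mirror the `result_item` dict built in the diffs; the hard-coded `results` list is a stand-in for whatever the generate call actually returns:

```python
# Sketch of consuming result items; keys follow the result_item dict
# built in gpu_worker.py. The sample list stands in for real output.
results = [
    {"samples": None, "output_file_path": "/outputs/video_0.mp4", "prompt_index": 0},
]

for result in results:
    if result.get("output_file_path"):
        # return_file_paths_only path: only the path crossed the socket.
        print(f"video saved server-side at {result['output_file_path']}")
    else:
        # Legacy path: the decoded tensor was transferred back.
        frames = result["samples"]
        print(f"received tensor output for prompt {result['prompt_index']}")
```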
Accuracy Tests

Benchmarking and Profiling
Checklist
Review Process
`/tag-run-ci-label`, `/rerun-failed-ci`, `/tag-and-rerun-ci`