End-to-end subtitle translation workstation with cloud and local OpenVINO model support.
Updated May 15, 2026 · TypeScript
A high-performance Docker container that runs OpenAI's Whisper model. Optimized for CPU, Intel NPU, Intel Arc/iGPU, and NVIDIA CUDA GPUs.
Professional benchmarking framework for Intel NPU, CPU, and GPU inference using OpenVINO
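Benchmarking frameworks of this kind typically enumerate OpenVINO's available devices (via `ov.Core().available_devices`, which returns names such as `CPU`, `GPU.0`, or `NPU`) and then fall back through a preferred order. A minimal sketch of that fallback logic, using a hypothetical `pick_device` helper (not taken from any repository listed here):

```python
def pick_device(available, preference=("NPU", "GPU", "CPU")):
    """Return the first preferred device present in `available`.

    `available` is a list of OpenVINO device names, e.g. the result of
    openvino.Core().available_devices, such as ["CPU", "GPU.0", "NPU"].
    Names like "GPU.0" are matched by their "GPU" prefix.
    """
    for dev in preference:
        if any(name.startswith(dev) for name in available):
            return dev
    raise RuntimeError("no supported inference device found")


# Example: on a machine without an NPU, fall back to the integrated GPU.
print(pick_device(["CPU", "GPU.0"]))  # prints "GPU"
```

The chosen name would then be passed to `core.compile_model(model, device)` before timing inference.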
Complete guide and scripts for setting up, running, and testing ONNX AI model performance on NPU architectures (Qualcomm QNN & Intel OpenVINO) on Windows.
Local-first tactical command-and-control drone POV demo
Multi-model LLM management server for OpenVINO (for Intel NPU/GPU)
NPU-assisted real-time shader pipeline for Minecraft — Intel NPU generates budget fields, GPU renders smarter