# video-pipeline

Here are 19 public repositories matching this topic...

Perception receipts for AI video pipelines. Cross-writer bit-exact under default settings (SHA-256 stable across writers in any language). Zero runtime dependencies; pure stdlib core. ~1.1 KB per video; per-frame CRC32 + schema + versioning. Useful now, improving continuously.

  • Updated May 14, 2026
  • Python
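The receipt scheme described above (a per-frame CRC32 plus one SHA-256 over the whole record, bit-exact across writers) can be sketched roughly as follows. This is a hypothetical illustration, not the repository's actual format: the `build_receipt` function, the JSON field names, and the canonicalization choice (sorted keys, no whitespace) are all assumptions made here to show why a canonical byte encoding is what makes the digest writer-independent.

```python
import binascii
import hashlib
import json

def build_receipt(frames):
    """Hypothetical sketch: CRC32 per frame, SHA-256 over a canonical receipt.

    `frames` is an iterable of raw frame bytes. Any writer that serializes
    the same records the same canonical way will produce the same SHA-256.
    """
    records = [
        # Mask to 32 bits so the value is stable regardless of platform signedness.
        {"index": i, "crc32": format(binascii.crc32(frame) & 0xFFFFFFFF, "08x")}
        for i, frame in enumerate(frames)
    ]
    # Canonical JSON (sorted keys, fixed separators) so every writer emits
    # byte-identical payloads and therefore the same digest.
    payload = json.dumps(
        {"version": 1, "frames": records},
        sort_keys=True, separators=(",", ":"),
    ).encode("utf-8")
    return {"receipt": records, "sha256": hashlib.sha256(payload).hexdigest()}

receipt = build_receipt([b"frame-0", b"frame-1"])
```

Because the payload is canonicalized before hashing, two independent implementations (in any language) that agree on the record schema reproduce the same SHA-256, which is the cross-writer property the description claims.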

Window capture, Whisper transcription, keyframe/OCR extraction, cloud processing, Feishu delivery, and validated note generation pipeline for OpenClaw-powered course and meeting videos.

  • Updated Mar 7, 2026
  • Python
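A capture-to-delivery pipeline like the one above is naturally a chain of stage functions, each taking and returning a state dict. The sketch below is a generic illustration under assumptions made here: `run_pipeline`, the stage names, and the state keys are hypothetical, and the real stages would call Whisper, an OCR engine, and the Feishu API rather than these stubs.

```python
from functools import reduce

def run_pipeline(video_path, stages):
    """Thread a state dict through an ordered list of stage functions."""
    return reduce(lambda state, stage: stage(state), stages, {"source": video_path})

# Hypothetical stand-in stages; real ones would invoke Whisper, OCR, Feishu, etc.
def transcribe(state):
    return {**state, "transcript": f"transcript of {state['source']}"}

def extract_keyframes(state):
    return {**state, "keyframes": []}

result = run_pipeline("lecture.mp4", [transcribe, extract_keyframes])
```

Keeping each stage a pure function over a shared state dict makes individual stages easy to test in isolation and lets the cloud-processing and delivery steps be appended without touching earlier ones.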

Multimodal video annotation pipeline — local GPU end-to-end (audio, vision, OCR, faces, brands, chat, music). Turns any long-form video with people into a time-aligned event corpus for synthetic-data construction, training-set curation, and analysis.

  • Updated May 6, 2026
  • Python
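A "time-aligned event corpus" as described above amounts to merging per-modality annotation tracks (audio, OCR, faces, and so on) into one timestamp-ordered sequence. A minimal sketch, with the `Event` record and `merge_tracks` helper being assumptions made here rather than the repository's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    start: float   # seconds from video start
    end: float     # seconds from video start
    modality: str  # e.g. "audio", "ocr", "face"
    label: str

def merge_tracks(*tracks):
    """Merge per-modality event lists into one time-ordered corpus."""
    return sorted((e for t in tracks for e in t), key=lambda e: (e.start, e.end))

audio = [Event(0.0, 2.5, "audio", "speech: hello")]
ocr = [Event(1.0, 1.5, "ocr", "title card")]
corpus = merge_tracks(audio, ocr)
```

Sorting on `(start, end)` gives a stable timeline even when events from different modalities overlap, which is what downstream curation and analysis tools typically consume.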
