FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation (ICRA 2021)
ComfyUI Depth Anything (v1/v2/distill-any-depth) TensorRT custom node (up to 14x faster)
Traffic analysis at a roundabout using computer vision
YOLOv5 TensorRT implementations
Based on TensorRT 8.2.4; compares inference speed across the different TensorRT APIs.
ViTPose without MMCV dependencies
Production-ready YOLOv8 segmentation deployment with TensorRT and ONNX support for CPU/GPU, including AI model integration guidance for Unitlab Annotate.
Use DBNet to detect words or barcodes; knowledge distillation and Python TensorRT inference are also provided.
The real-time instance segmentation algorithm SparseInst running on TensorRT and ONNX
Convert YOLO models to ONNX and TensorRT, with NMSBatched added.
Sinapsis repo with templates for face detection, face recognition and face verification
Advanced inference performance using TensorRT for CRAFT text detection. Implements modules to convert PyTorch -> ONNX -> TensorRT, with dynamic-shape (multi-size input) inference.
"Narrative Canvas" is an edge-computing project based on the NVIDIA Jetson. It can transform uploaded images into captivating stories and artworks.
An oriented object detection framework based on TensorRT
Convert ONNX models to TensorRT engines and run inference in containerized environments
Export a TensorRT engine (from ONNX) and run inference with Python
ADAS forward-collision warning system: true innovation does not come from closed barriers, but from sharing knowledge and lowering entry barriers so that more people can benefit.
Dolphin is a Python toolkit that speeds up TensorRT inference by providing CUDA-accelerated processing.
Real-time traffic analysis, a small yet tricky pet project
An MNIST example of how to convert a .pt file to .onnx, then convert the .onnx file to a .trt file.