# CoDeF: Content Deformation Fields for Temporally Consistent Video Processing
<img src='docs/teaser.gif'></img>
<!-- Abstract: *This work presents the content deformation field **CoDeF** as a new type of video representation, which consists of a canonical content field aggregating the static contents in the entire video and a temporal deformation field recording the transformations from the canonical image (i.e., rendered from the canonical content field) to each individual frame along the time axis. Given a target video, these two fields are jointly optimized to reconstruct it through a carefully tailored rendering pipeline. We also introduce some decent regularizations into the optimization process, urging the canonical content field to inherit semantics (e.g., the object shape) from the video. With such a design, **CoDeF** naturally supports lifting image algorithms to videos, in the sense that one can apply an image algorithm to the canonical image and effortlessly propagate the outcomes to the entire video with the aid of the temporal deformation field. We experimentally show that **CoDeF** is able to lift image-to-image translation to video-to-video translation and lift keypoint detection to keypoint tracking without any training. More importantly, thanks to our lifting strategy that deploys the algorithms on only one image, we achieve superior cross-frame consistency in translated videos compared to existing video-to-video translation approaches, and even manage to track non-rigid objects like water and smog.* -->
* 1 NVIDIA GPU (RTX A6000) with CUDA version 11.7. (Other GPUs are also suitable; 10GB of GPU memory is sufficient to run our code.)
To use the video visualizer, please install `ffmpeg` via:
```shell
sudo apt-get install ffmpeg
```
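Once installed, frames rendered by the visualizer can be stitched into a video. The sketch below is illustrative only: the frame directory, file-name pattern, and framerate are placeholder assumptions, not values from this codebase.

```shell
# Sketch: stitch per-frame PNGs into an mp4 (paths and framerate are examples).
FRAME_DIR=all_sequences/scene_0/scene_0
OUT=scene_0.mp4
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -y -framerate 30 -i "${FRAME_DIR}/%05d.png" \
    -c:v libx264 -pix_fmt yuv420p "${OUT}" || echo "No frames found at ${FRAME_DIR}"
else
  echo "ffmpeg not found; install it with the command above."
fi
```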
For additional Python libraries, please install them with:
```shell
pip install -r requirements.txt
```
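If you prefer to keep the dependencies isolated, you can install them inside a virtual environment first (the environment name `codef-env` below is only an example):

```shell
# Optional: create and activate a virtual environment before installing.
python3 -m venv codef-env
. codef-env/bin/activate
python -m pip --version   # confirm pip inside the environment works
```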
Our code also depends on [tiny-cuda-nn](https://github.com/NVlabs/tiny-cuda-nn).
See [this repository](https://github.com/NVlabs/tiny-cuda-nn#pytorch-extension) for PyTorch extension installation instructions.
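The PyTorch bindings are installed with pip from the tiny-cuda-nn repository; building them requires the CUDA toolkit, so the sketch below checks for `nvcc` first:

```shell
# Install spec from the tiny-cuda-nn repository; building requires CUDA (nvcc).
TCNN_SPEC="git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch"
if command -v nvcc >/dev/null 2>&1; then
  pip install "${TCNN_SPEC}"
else
  echo "nvcc not found; install the CUDA toolkit before building tiny-cuda-nn."
fi
```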
## Data
### Provided data

We have provided some videos [here](https://drive.google.com/file/d/1cKZF6ILeokCjsSAGBmummcQh0uRGaC_F/view?usp=sharing) for a quick test. Please download and unzip the data and put them in the root directory. More videos can be downloaded [here](https://rec.ustc.edu.cn/share/5d1e0bb0-31d7-11ee-aa60-d1fd6c62dfb4).
### Customize your own data
*Stay tuned for data preparation scripts.*

Please organize your own data as follows:
```
CoDeF
│
...
│
└─── ...
```
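A minimal sketch of creating this skeleton for a new sequence. The folder names mirror the layout of the provided data; `my_video` and the `_masks` directory are placeholders, so adapt them to your own sequence:

```shell
# Create the per-sequence folders (names are placeholders; mirror the
# layout of the provided data for your own video).
NAME=my_video
mkdir -p "all_sequences/${NAME}/${NAME}"        # extracted video frames go here
mkdir -p "all_sequences/${NAME}/${NAME}_masks"  # optional per-frame masks
```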
## Pretrained checkpoints
You can download checkpoints pre-trained on the provided videos via:
Please check the configuration files in `configs/`; you can always add your own model config.
## Test reconstruction <a id="anchor"></a>
```shell
./scripts/test_multi.sh
```
After running the script, the reconstructed videos can be found in `results/all_sequences/{NAME}/{EXP_NAME}`, along with the canonical image.
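For example, to inspect the outputs for one sequence (`scene_0` and `base` below are placeholder values for `NAME` and `EXP_NAME`, not names shipped with the code):

```shell
# Sketch: locate the reconstruction outputs for one sequence.
NAME=scene_0
EXP_NAME=base
OUT_DIR="results/all_sequences/${NAME}/${EXP_NAME}"
if [ -d "${OUT_DIR}" ]; then
  ls "${OUT_DIR}"
else
  echo "No outputs yet; run ./scripts/test_multi.sh first (expected at ${OUT_DIR})."
fi
```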
## Test video translation
After obtaining the canonical image through [this step](#anchor), use your preferred text prompts to transfer it using [ControlNet](https://github.com/lllyasviel/ControlNet).
Once you have the transferred canonical image, place it in `all_sequences/${NAME}/${EXP_NAME}_control` (i.e. `CANONICAL_DIR` in `scripts/test_canonical.sh`).
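A sketch of that placement step (`scene_0`, `base`, and the image file name are placeholders; substitute your own `NAME`, `EXP_NAME`, and transferred image):

```shell
# Copy the ControlNet-transferred canonical image into CANONICAL_DIR.
# NAME, EXP_NAME, and the file name are placeholders.
NAME=scene_0
EXP_NAME=base
CANONICAL_DIR="all_sequences/${NAME}/${EXP_NAME}_control"
mkdir -p "${CANONICAL_DIR}"
cp transferred_canonical.png "${CANONICAL_DIR}/" 2>/dev/null \
  || echo "Place your transferred image in ${CANONICAL_DIR}"
```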
Then run:

```shell
./scripts/test_canonical.sh
```
The transferred results can be seen in `results/all_sequences/{NAME}/{EXP_NAME}_transformed`.
*Note*: The `canonical_wh` option in the configuration file should be set with caution, usually a little larger than `img_wh`, as it determines the field of view of the canonical image.
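For instance, a config might pad the canonical view slightly beyond the frame resolution. The numbers below are purely illustrative, not taken from a shipped config:

```yaml
img_wh: [540, 540]        # resolution of the input frames
canonical_wh: [640, 640]  # a little larger, so content moving past the frame edge stays in view
```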