Commit fa29c8f (parent 1b1abf0): Added a quick start guide
README.md

Lines changed: 11 additions & 0 deletions
@@ -50,6 +50,17 @@ Configuration variables are grouped in the `config.py` file. This is the one you **
Those scripts are available for both Python and Jupyter. Two inference scripts are available, but only for Python (though they can easily be imported in Jupyter).

### How to start

This is an example of how to get started with the ISPRS Vaihingen dataset, which contains orthoimages (in the `top/` folder) and ground truths (in the `gts_for_participants/` folder).

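Step 1 below edits several variables in `config.py`. A minimal sketch of what those edits might look like is shown here; the palette follows the standard ISPRS color coding, but the exact paths and tile IDs are illustrative assumptions to adapt to your own setup:

```python
# Illustrative config.py sketch. Variable names follow those cited in step 1;
# the values (paths, tile IDs) are assumptions, not the repository defaults.
BASE_DIR = "/data/"                     # root folder for all datasets
DATASET = "Vaihingen"                   # unique name for this dataset
DATASET_DIR = BASE_DIR + DATASET + "/"  # where the Vaihingen data lives

# Classes and their RGB colors (standard ISPRS semantic labeling palette)
label_values = ["impervious_surfaces", "buildings", "low_vegetation",
                "trees", "cars", "clutter"]
palette = {0: (255, 255, 255),  # impervious surfaces
           1: (0, 0, 255),      # buildings
           2: (0, 255, 255),    # low vegetation
           3: (0, 255, 0),      # trees
           4: (255, 255, 0),    # cars
           5: (255, 0, 0)}      # clutter

# Folder layout and the train/test split over tile IDs (illustrative split)
folders = {"images": DATASET_DIR + "top/",
           "labels": DATASET_DIR + "gts_numpy/"}
train_ids = [1, 3, 5, 7, 11, 13, 15, 17, 21, 23, 26, 28]
test_ids = [30, 32, 34, 37]
```

The tile IDs refer to the numbers in the Vaihingen file names (e.g. `top_mosaic_09cm_area1.tif`).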
1. First, edit the `config.py` file. `BASE_DIR`, `DATASET` and `DATASET_DIR` point to the folder where the dataset is stored and give the dataset a unique name, e.g. "Vaihingen". `label_values` and `palette` define the classes to be used and their associated RGB colors. `folders`, `train_ids` and `test_ids` define the folder layout of the dataset and the train/test split, using the unique numerical IDs associated with the tiles.
2. Transform the RGB-encoded ground truth images into 2D matrices using the `convert_gt.py` script, e.g. `python convert_gt.py gts_for_participants/*.tif --from-color --out gts_numpy/`. This populates a new `gts_numpy/` folder containing the matrices. Note that the `folders` value for the labels should point to this folder (`gts_numpy/`).
3. Extract small patches from the tiles to create the train and test sets: `python extract_images.py`
4. Populate the LMDB with the extracted images: `python create_lmdb.py`
5. Train the network for 40,000 iterations, starting from the VGG-16 weights, and save the weights into the `trained_network_weights/` folder: `python training.py --niter 40000 --update 1000 --init vgg16weights.caffemodel --snapshot trained_network_weights/`
6. Test the trained network on some tiles: `python inference.py 16 32 --weights trained_network_weights/net_iter_40000.caffemodel`

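The RGB-to-label mapping performed in step 2 can be sketched as follows. This is a minimal pure-Python illustration, not the actual `convert_gt.py` implementation (which operates on the TIFF files); the palette is assumed to match the one defined in `config.py`:

```python
# Sketch of the RGB-to-label conversion done by convert_gt.py (conceptually).
# RGB color -> class index, using the standard ISPRS semantic labeling palette.
PALETTE = {
    (255, 255, 255): 0,  # impervious surfaces
    (0, 0, 255): 1,      # buildings
    (0, 255, 255): 2,    # low vegetation
    (0, 255, 0): 3,      # trees
    (255, 255, 0): 4,    # cars
    (255, 0, 0): 5,      # clutter / background
}

def rgb_to_labels(image):
    """Map an H x W image of RGB tuples to an H x W matrix of class indices."""
    return [[PALETTE[pixel] for pixel in row] for row in image]
```

The resulting 2D matrices are what the training pipeline consumes, instead of the color-coded ground truth images.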
## Requirements
You need to compile and use our Caffe fork (including [Alex Kendall's Unpooling Layer](https://github.com/alexgkendall/caffe-segnet)) to use the provided models. Training on GPU is recommended but not mandatory. You can download the fork by cloning this repository and executing:
