Commit c5bb203 — Merge pull request #7 from lisadunlap/rebuttal: Fixing Documentation
2 parents 1110208 + 756be67

1 file changed: README.md (+34 −26 lines)
@@ -1,8 +1,10 @@
 # LADS
-Official Implementation of LADS (Latent Augmentation using Domain descriptionS)
+Official Implementation of [LADS (Latent Augmentation using Domain descriptionS)](https://lisadunlap.github.io/LADS-website)

 ![LADS method overview.](figs/lads-method-2-1.png "LADS method overview")

+*WARNING: this is still a WIP; please raise an issue if you run into any bugs.*
+
 ## Getting started

 1. Install the dependencies for our code using Conda. You may need to adjust the environment YAML file depending on your setup.
@@ -16,13 +18,6 @@ Official Implementation of LADS (Latent Augmentation using Domain descriptionS)

 4. Run one of the config files and be amazed (or mildly impressed) by what LADS can do

-## Checkpoints
-The main results and checkpoints of LADS and other baselines can be accessed on wandb.
-* Waterbirds: https://wandb.ai/clipinvariance/LADS_Waterbirds_Replication
-* ColoredMNIST: https://wandb.ai/clipinvariance/LADS_ColoredMNIST_Replication
-* CUB: https://wandb.ai/clipinvariance/LADS_CUBPainting_Replication
-* miniDomainNet: https://wandb.ai/clipinvariance/LADS_miniDomainNet_Replication
-
 ## Code Structure
 The configurations for each method are in the `configs` folder. To try, say, the baseline of doing normal LR on the CLIP embeddings:
 ```
@@ -44,20 +39,14 @@ Then, add the path to the saved embeddings to DATASET_PATHS in [data_helpers](./

 More detail on each method and its config files can be found in the `configs` folder.

-## Some important parameters
-**EXP.TEXT_PROMPTS**
-
-These are the domains/biases that you want to be invariant to. You can either have them be class-specific (e.g. `["a painting of a {}.", "clipart of a {}."]`) or generic (e.g. `[["painting"], ["clipart"]]`). The default is class-specific, so if you want to use generic prompts instead, set `AUGMENTATION.GENERIC=True`. For generic prompts, if you want to average the text embeddings of several phrases of a domain, simply add them to the list (e.g. `[["painting", "a photo of a painting", "an image of a painting"], ["clipart", "clipart of an object"]]`).
-
-**EXP.NEUTRAL_PROMPTS**
-
-If you want to take the difference in text embeddings (for things like the directional loss, most of the augmentations, and the embedding debiasing methods), you can set a neutral prompt (e.g. `["a sketch of a {}."]` or `[["a photo of a sketch"]]`). Like TEXT_PROMPTS, these can be class-specific or generic, but if TEXT_PROMPTS is class-specific, so is NEUTRAL_PROMPTS, and vice versa.
-
-**EXP.ADVICE_METHOD**
+## Running LADS
+In LADS we train an augmentation network, augment the training data, then train a linear probe with the original and augmented data. Thus we use the same ADVICE_METHOD class and change the `EXP.AUGMENTATION` parameter to `LADS`.

-This sets the type of linear probing you are doing. Set it to `LR` to use the scikit-learn LR (what is in the CLIP repo) or `ClipMLP` for a PyTorch MLP (if `METHOD.MODEL.NUM_LAYERS=1` this is LR). Typically `ClipMLP` runs a lot faster than `LR`.
+To make sure everything is working, run:
+`python main.py --config configs/CUB/lads.yaml`
+and check your results against https://wandb.ai/clipinvariance/LADS_CUBPainting_Replication/runs/ok37oz5h.

-You can also set the advice method to one of the debiasing methods (different from the augmentations in that we augment the training data without adding back the original training data), but we no longer use them; if you want to try them out, check the configs (WARNING: these are old, so there is a high chance of bugs).
+For the bias datasets, the augmentation class is called `BiasLADS`, and you can run the `lads.yaml` configs as well :)
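As an aside, the recipe described above (train an augmentation network, augment the training features, then fit a linear probe on the union of original and augmented features) can be sketched in a few lines. This is an illustrative numpy stand-in, not the repo's code: the augmentation "network" is a toy linear map `A`, and all names (`X`, `W_true`, `A`) are hypothetical.

```python
# Minimal sketch of the LADS training recipe with stand-in CLIP features.
import numpy as np

rng = np.random.default_rng(0)
d, n, c = 16, 64, 4                         # feature dim, samples, classes
X = rng.normal(size=(n, d))                 # stand-in CLIP image features
W_true = rng.normal(size=(d, c))            # hypothetical label-generating weights
y = np.argmax(X @ W_true, axis=1)           # linearly realizable labels

A = np.eye(d) + 0.01 * rng.normal(size=(d, d))   # stand-in augmentation network
X_aug = X @ A.T                             # "augmented" features

# Train the probe on original + augmented data, as in the LADS recipe.
X_train = np.concatenate([X, X_aug])
y_train = np.concatenate([y, y])

# Softmax linear probe trained by plain gradient descent.
W = np.zeros((d, c))
onehot = np.eye(c)[y_train]
for _ in range(200):
    logits = X_train @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    W -= 0.1 * X_train.T @ (p - onehot) / len(X_train)

acc = float((np.argmax(X @ W, axis=1) == y).mean())
```

The point of the concatenation step is that the probe sees both the original and the domain-shifted versions of every training point, which is what pushes it toward domain invariance.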

 ## Running CLIP Zero-Shot
 In order to run the CLIP zero-shot baseline, set `EXP.ADVICE_METHOD=CLIPZS` and run the `clip_zs.py` file instead of `main.py`.
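For reference, CLIP zero-shot classification amounts to taking the class whose prompt embedding has the highest cosine similarity with the image embedding. The sketch below is a stand-in with random vectors, not `clip_zs.py` itself.

```python
# Illustrative zero-shot classifier over stand-in text/image embeddings.
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 32, 5
text_emb = rng.normal(size=(n_classes, d))      # stand-in CLIP text features
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

def zero_shot_predict(image_emb: np.ndarray) -> int:
    """Return the class with the highest cosine similarity to the image."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    return int(np.argmax(text_emb @ image_emb))

# An image embedding aligned with class 3 should be classified as 3.
pred = zero_shot_predict(text_emb[3] + 0.01 * rng.normal(size=d))
```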
@@ -84,11 +73,30 @@ python main.py --config configs/ColoredMNIST/mlp.yaml

 **LR Initialized with the CLIP ZS Language Weights** For a small bump in OOD performance, you can run the `mlpzs.yaml` config to initialize the linear layer with the text embeddings of the classes. The prompts used are dictated by `EXP.TEMPLATES`, similar to running zero-shot.
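The idea behind this initialization can be sketched as follows: if the linear layer's weight matrix starts as the normalized class text embeddings, then before any training the probe's logits coincide with CLIP's zero-shot cosine similarities, so training starts from the zero-shot solution rather than from scratch. Stand-in embeddings below; this is not the `mlpzs.yaml` code itself.

```python
# Sketch: initialize a linear probe from the zero-shot language weights.
import numpy as np

rng = np.random.default_rng(0)
d, n_classes = 32, 5
text_emb = rng.normal(size=(n_classes, d))      # stand-in class text embeddings
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

W = text_emb.copy()                  # linear layer initialized from language weights
image = text_emb[2] / np.linalg.norm(text_emb[2])   # stand-in image feature

logits_probe = W @ image             # probe logits at initialization
logits_zs = text_emb @ image         # CLIP zero-shot cosine similarities
```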

-## Running LADS
-In LADS we train an augmentation network, augment the training data, then train a linear probe with the original and augmented data. Thus we use the same ADVICE_METHOD class and change the `EXP.AUGMENTATION` parameter to `LADS`.
+## Some important parameters
+<details><summary>EXP.TEXT_PROMPTS</summary>

-To make sure everything is working, run:
-`python main.py --config configs/CUB/lads.yaml`
-and check your results against https://wandb.ai/clipinvariance/LADS_CUBPainting_Replication/runs/ok37oz5h.
+These are the domains/biases that you want to be invariant to. You can either have them be class-specific (e.g. `["a painting of a {}.", "clipart of a {}."]`) or generic (e.g. `[["painting"], ["clipart"]]`). The default is class-specific, so if you want to use generic prompts instead, set `AUGMENTATION.GENERIC=True`. For generic prompts, if you want to average the text embeddings of several phrases of a domain, simply add them to the list (e.g. `[["painting", "a photo of a painting", "an image of a painting"], ["clipart", "clipart of an object"]]`).
+</details>
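The generic-prompt averaging described above can be sketched as: each domain is a list of phrases, and the domain's embedding is the re-normalized mean of the phrase embeddings. `embed()` below is a hypothetical stand-in for CLIP's text encoder, not the repo's function.

```python
# Sketch of averaging several phrase embeddings per domain.
import numpy as np

def embed(phrase: str) -> np.ndarray:
    """Stand-in text encoder: a deterministic random unit vector per phrase."""
    r = np.random.default_rng(abs(hash(phrase)) % (2**32))
    v = r.normal(size=32)
    return v / np.linalg.norm(v)

# Generic prompts: one list of phrases per domain, as in the README example.
text_prompts = [["painting", "a photo of a painting", "an image of a painting"],
                ["clipart", "clipart of an object"]]

domain_embs = []
for phrases in text_prompts:
    mean = np.mean([embed(p) for p in phrases], axis=0)
    domain_embs.append(mean / np.linalg.norm(mean))   # re-normalize the average
```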

-For the bias datasets, the augmentation class is called `BiasLADS`, and you can run the `lads.yaml` configs as well :)
+
+<details><summary>EXP.NEUTRAL_PROMPTS</summary>
+
+If you want to take the difference in text embeddings (for things like the directional loss, most of the augmentations, and the embedding debiasing methods), you can set a neutral prompt (e.g. `["a sketch of a {}."]` or `[["a photo of a sketch"]]`). Like TEXT_PROMPTS, these can be class-specific or generic, but if TEXT_PROMPTS is class-specific, so is NEUTRAL_PROMPTS, and vice versa.
+</details>
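The "difference in text embeddings" mentioned above can be sketched as computing a direction from the neutral prompt's embedding to the target domain's embedding, and shifting an image feature along it. All vectors below are random stand-ins; the step size `0.5` is an arbitrary illustration, not a repo parameter.

```python
# Sketch: a domain-shift direction from a neutral and a target text embedding.
import numpy as np

rng = np.random.default_rng(0)
d = 32
e_target = rng.normal(size=d)                 # e.g. "a painting of a {}."
e_target /= np.linalg.norm(e_target)
e_neutral = rng.normal(size=d)                # e.g. "a photo of a {}."
e_neutral /= np.linalg.norm(e_neutral)

direction = e_target - e_neutral              # difference of text embeddings
direction /= np.linalg.norm(direction)

x = rng.normal(size=d)                        # stand-in image feature
x_aug = x + 0.5 * direction                   # shift toward the target domain
```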
+
+
+<details><summary>EXP.ADVICE_METHOD</summary>
+
+This sets the type of linear probing you are doing. Set it to `LR` to use the scikit-learn LR (what is in the CLIP repo) or `ClipMLP` for a PyTorch MLP (if `METHOD.MODEL.NUM_LAYERS=1` this is LR). Typically `ClipMLP` runs a lot faster than `LR`.
+
+You can also set the advice method to one of the debiasing methods (different from the augmentations in that we augment the training data without adding back the original training data), but we no longer use them; if you want to try them out, check the configs (WARNING: these are old, so there is a high chance of bugs).
+</details>
+
+
+## Checkpoints
+The main results and checkpoints of LADS and other baselines can be accessed on wandb.
+* Waterbirds: https://wandb.ai/clipinvariance/LADS_Waterbirds_Replication
+* ColoredMNIST: https://wandb.ai/clipinvariance/LADS_ColoredMNIST_Replication
+* CUB: https://wandb.ai/clipinvariance/LADS_CUBPainting_Replication
+* miniDomainNet: https://wandb.ai/clipinvariance/LADS_miniDomainNet_Replication
