
Commit da1f4db

Dong Zhou authored and you-n-g committed
update README
1 parent a7c41b6 commit da1f4db

File tree: 5 files changed, +58 −44 lines


examples/benchmarks/README.md

Lines changed: 3 additions & 0 deletions

@@ -25,6 +25,7 @@ The numbers shown below demonstrate the performance of the entire `workflow` of
 | TCTS (Xueqing Wu, et al.)| Alpha360 | 0.0485±0.00 | 0.3689±0.04| 0.0586±0.00 | 0.4669±0.02 | 0.0816±0.02 | 1.1572±0.30| -0.0689±0.02 |
 | Transformer (Ashish Vaswani, et al.)| Alpha360 | 0.0141±0.00 | 0.0917±0.02| 0.0331±0.00 | 0.2357±0.03 | -0.0259±0.03 | -0.3323±0.43| -0.1763±0.07 |
 | Localformer (Juyong Jiang, et al.)| Alpha360 | 0.0408±0.00 | 0.2988±0.03| 0.0538±0.00 | 0.4105±0.02 | 0.0275±0.03 | 0.3464±0.37| -0.1182±0.03 |
+| TRA (Hengxu Lin, et al.)| Alpha360 | 0.0500±0.00 | 0.3966±0.04 | 0.0594±0.00 | 0.4856±0.03 | 0.1000±0.02 | 1.3425±0.31 | -0.0845±0.02 |
 
 ## Alpha158 dataset
 | Model Name | Dataset | IC | ICIR | Rank IC | Rank ICIR | Annualized Return | Information Ratio | Max Drawdown |
@@ -43,6 +44,8 @@ The numbers shown below demonstrate the performance of the entire `workflow` of
 | TabNet (Sercan O. Arik, et al.)| Alpha158 | 0.0383±0.00 | 0.3414±0.00| 0.0388±0.00 | 0.3460±0.00 | 0.0226±0.00 | 0.2652±0.00| -0.1072±0.00 |
 | Transformer (Ashish Vaswani, et al.)| Alpha158 | 0.0274±0.00 | 0.2166±0.04| 0.0409±0.00 | 0.3342±0.04 | 0.0204±0.03 | 0.2888±0.40| -0.1216±0.04 |
 | Localformer (Juyong Jiang, et al.)| Alpha158 | 0.0355±0.00 | 0.2747±0.04| 0.0466±0.00 | 0.3762±0.03 | 0.0506±0.02 | 0.7447±0.34| -0.0875±0.02 |
+| TRA (Hengxu Lin, et al.)| Alpha158 (with selected 20 features) | 0.0440±0.00 | 0.3592±0.03 | 0.0500±0.00 | 0.4256±0.02 | 0.0747±0.03 | 1.1281±0.49 | -0.0813±0.03 |
+| TRA (Hengxu Lin, et al.)| Alpha158 | 0.0474±0.00 | 0.3653±0.03 | 0.0573±0.00 | 0.4494±0.02 | 0.0770±0.02 | 1.1342±0.38 | -0.0852±0.03 |
 
 - The selected 20 features are based on the feature importance of a lightgbm-based model.
 - The base model of DoubleEnsemble is LGBM.
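The note above says the 20-feature subset for the TRA run was chosen by the feature importance of a LightGBM-based model. A rough sketch of that selection step (not qlib's actual code: scikit-learn's `GradientBoostingRegressor` stands in for LightGBM, and the data is synthetic):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))       # synthetic stand-in for the Alpha158 factor matrix
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.1, size=500)

booster = GradientBoostingRegressor(n_estimators=50, random_state=0).fit(X, y)

# Rank features by the fitted model's importance and keep the top 20.
top20 = np.argsort(booster.feature_importances_)[::-1][:20]
X_selected = X[:, top20]
```

With the real Alpha158 features a LightGBM booster would play the same role, its per-feature importance replacing `feature_importances_` here.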

examples/benchmarks/TRA/README.md

Lines changed: 52 additions & 41 deletions

@@ -1,53 +1,77 @@
 # Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport
 
-This code provides a PyTorch implementation for TRA (Temporal Routing Adaptor), as described in the paper [Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport](http://arxiv.org/abs/2106.12950).
+Temporal Routing Adaptor (TRA) is designed to capture multiple trading patterns in stock market data. Please refer to [our paper](http://arxiv.org/abs/2106.12950) for more details.
 
-* TRA (Temporal Routing Adaptor) is a lightweight module that consists of a set of independent predictors for learning multiple patterns as well as a router to dispatch samples to different predictors.
-* We also design a learning algorithm based on Optimal Transport (OT) to obtain the optimal sample to predictor assignment and effectively optimize the router with such assignment through an auxiliary loss term.
+If you find our work useful in your research, please cite:
+```
+@inproceedings{HengxuKDD2021,
+    author = {Hengxu Lin and Dong Zhou and Weiqing Liu and Jiang Bian},
+    title = {Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport},
+    booktitle = {Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
+    series = {KDD '21},
+    year = {2021},
+    publisher = {ACM},
+}
+
+@article{yang2020qlib,
+    title={Qlib: An AI-oriented Quantitative Investment Platform},
+    author={Yang, Xiao and Liu, Weiqing and Zhou, Dong and Bian, Jiang and Liu, Tie-Yan},
+    journal={arXiv preprint arXiv:2009.11189},
+    year={2020}
+}
+```
+
+## Usage (Recommended)
+
+**Update**: `TRA` has been moved to `qlib.contrib.model.pytorch_tra` to support other `Qlib` components like `qlib.workflow` and the `Alpha158/Alpha360` datasets.
+
+Please follow the official [doc](https://qlib.readthedocs.io/en/latest/component/workflow.html) to use `TRA` with `workflow`. Here we also provide several example config files:
+
+- `workflow_config_tra_Alpha360.yaml`: running `TRA` with the `Alpha360` dataset
+- `workflow_config_tra_Alpha158.yaml`: running `TRA` with the `Alpha158` dataset (with feature subsampling)
+- `workflow_config_tra_Alpha158_full.yaml`: running `TRA` with the `Alpha158` dataset (without feature subsampling)
 
+The performance of `TRA` is reported in [Benchmarks](https://github.com/microsoft/qlib/tree/main/examples/benchmarks).
 
-# Running TRA
+## Usage (Not Maintained)
 
-## Requirements
-- Install `Qlib` main branch
+This section describes how to reproduce the results in the paper.
 
-## Running
+### Running
 
 We attach our running scripts for the paper in `run.sh`.
 
 And here are two ways to run the model:
 
 * Running from scripts with default parameters
-  You can directly run from Qlib command `qrun`:
-  ```
-  qrun configs/config_alstm.yaml
-  ```
+
+  You can directly run from the Qlib command `qrun`:
+  ```
+  qrun configs/config_alstm.yaml
+  ```
 
 * Running from code with self-defined parameters
-  Setting different parameters is also allowed. See codes in `example.py`:
-  ```
-  python example.py --config_file configs/config_alstm.yaml
-  ```
 
-Here we trained TRA on a pretrained backbone model. Therefore we run `*_init.yaml` before TRA's scipts.
+  Setting different parameters is also allowed. See the code in `example.py`:
+  ```
+  python example.py --config_file configs/config_alstm.yaml
+  ```
 
-# Results
+Here we train TRA on a pretrained backbone model, so we run the `*_init.yaml` configs before TRA's scripts.
 
-## Outputs
+### Results
 
 After running the scripts, you can find result files in path `./output`:
 
-`info.json` - config settings and result metrics.
-
-`log.csv` - running logs.
+* `info.json` - config settings and result metrics.
+* `log.csv` - running logs.
+* `model.bin` - the model parameter dictionary.
+* `pred.pkl` - the prediction scores and output for inference.
 
-`model.bin` - the model parameter dictionary.
+Evaluation metrics reported in the paper:
 
-`pred.pkl` - the prediction scores and output for inference.
-
-## Our Results
 | Methods | MSE| MAE| IC | ICIR | AR | AV | SR | MDD |
-|-------------------|-------------------|---------------------|--------------------|--------------------|--------------------|--------------------|--------------------|
+|-------|-------|------|-----|-----|-----|-----|-----|-----|
 |Linear|0.163|0.327|0.020|0.132|-3.2%|16.8%|-0.191|32.1%|
 |LightGBM|0.160(0.000)|0.323(0.000)|0.041|0.292|7.8%|15.5%|0.503|25.7%|
 |MLP|0.160(0.002)|0.323(0.003)|0.037|0.273|3.7%|15.3%|0.264|26.2%|
@@ -61,21 +85,8 @@ After running the scripts, you can find result files in path `./output`:
 
 A more detailed demo for our experiment results in the paper can be found in `Report.ipynb`.
 
-# Common Issues
+## Common Issues
 
 For help or issues using TRA, please submit a GitHub issue.
 
-Sometimes we might encounter situation where the loss is `NaN`, please check the `epsilon` parameter in the sinkhorn algorithm, adjusting the `epsilon` according to input's scale is important.
-
-# Citation
-If you find this repository useful in your research, please cite:
-```
-@inproceedings{HengxuKDD2021,
-    author = {Hengxu Lin and Dong Zhou and Weiqing Liu and Jiang Bian},
-    title = {Learning Multiple Stock Trading Patterns with Temporal Routing Adaptor and Optimal Transport},
-    booktitle = {Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery \& Data Mining},
-    series = {KDD '21},
-    year = {2021},
-    publisher = {ACM},
-}
-```
+Sometimes the loss may become `NaN`; check the `epsilon` parameter of the sinkhorn algorithm, since adjusting `epsilon` to the scale of the input is important.
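The last added line warns that a mismatched `epsilon` in the sinkhorn step can make the loss `NaN`. A minimal numpy sketch of entropy-regularized sinkhorn normalization (an illustration, not the TRA implementation) shows why: the kernel `exp(scores / epsilon)` overflows once `epsilon` is small relative to the score scale, and the subsequent normalizations turn `inf` into `NaN`.

```python
import numpy as np

def sinkhorn(scores, epsilon=0.1, n_iters=50):
    """Soft sample-to-predictor assignment via sinkhorn normalization.

    scores: (n_samples, n_predictors). If epsilon is much smaller than
    the score scale, np.exp overflows to inf and the output becomes NaN.
    """
    Q = np.exp(scores / epsilon)                 # kernel: the overflow-prone step
    Q /= Q.sum()
    n, m = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=1, keepdims=True) * n    # enforce uniform marginal over samples
        Q /= Q.sum(axis=0, keepdims=True) * m    # enforce uniform marginal over predictors
    return Q * n                                 # each row: a distribution over predictors

rng = np.random.default_rng(0)
scores = rng.normal(size=(16, 3))
P = sinkhorn(scores, epsilon=0.5)      # epsilon matched to the score scale: rows sum to ~1
bad = sinkhorn(scores, epsilon=1e-4)   # epsilon far too small: overflow, then NaN
```

Scaling `epsilon` with the magnitude of the input scores (or normalizing the scores first) avoids the overflow.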

examples/benchmarks/TRA/workflow_config_tra_Alpha158.yaml

Lines changed: 1 addition & 1 deletion

@@ -80,7 +80,7 @@ task:
       early_stop: 10
       smooth_steps: 5
       seed: 0
-      logdir: output/Alpha158/router
+      logdir:
       lamb: 1.0
       rho: 1.0
       transport_method: router
examples/benchmarks/TRA/workflow_config_tra_Alpha158_full.yaml

Lines changed: 1 addition & 1 deletion

@@ -74,7 +74,7 @@ task:
       early_stop: 10
       smooth_steps: 5
       seed: 0
-      logdir: output/Alpha158_full/router
+      logdir:
       lamb: 1.0
       rho: 1.0
       transport_method: router

examples/benchmarks/TRA/workflow_config_tra_Alpha360.yaml

Lines changed: 1 addition & 1 deletion

@@ -73,7 +73,7 @@ task:
       max_steps_per_epoch: 100
       early_stop: 10
       smooth_steps: 5
-      logdir: output/Alpha360/router
+      logdir:
       seed: 0
       lamb: 1.0
       rho: 1.0
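The three config diffs above all replace `logdir: output/...` with a bare `logdir:`. In YAML, a key with no value parses as null, so the model presumably falls back to its default output location. A quick check of that parsing behavior, assuming PyYAML is installed:

```python
import yaml  # PyYAML

# Hypothetical fragment mirroring the edited configs above.
snippet = """
task:
  logdir:
  lamb: 1.0
"""
cfg = yaml.safe_load(snippet)
print(cfg["task"]["logdir"])   # None: the key is present but carries no value
```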
