Update benchmark performance #97
Conversation
examples/benchmarks/README.md
Outdated
| Model Name | Dataset | IC | ICIR | Rank IC | Rank ICIR | Annualized Return | Information Ratio | Max Drawdown |
|---|---|---|---|---|---|---|---|---|
| LSTM | Alpha158 (with selected 20 features) | 0.0337±0.01 | 0.2562±0.05 | 0.0427±0.01 | 0.3392±0.04 | 0.0269±0.06 | 0.3385±0.74 | -0.1285±0.04 |
| ALSTM | Alpha158 (with selected 20 features) | 0.0366±0.00 | 0.2803±0.04 | 0.0478±0.00 | 0.3770±0.02 | 0.0520±0.03 | 0.7115±0.30 | -0.0986±0.01 |
| GATs | Alpha158 (with selected 20 features) | 0.0355±0.00 | 0.2576±0.02 | 0.0465±0.00 | 0.3585±0.00 | 0.0509±0.02 | 0.7212±0.22 | -0.0821±0.01 |
| TFT | Alpha158 (with selected 20 features) | 0.0344±0.00 | 0.2071±0.02 | 0.0103±0.00 | 0.0632±0.01 | 0.0638±0.00 | 0.5845±0.8 | -0.1754±0.02 |
Why is TFT's annualized return so stable while its information ratio is so variable?
Wendi calculated the information ratio's std incorrectly. It should be 0.08 instead of 0.8. I have just updated it.
I calculated the mean of two 5-run batches and typed it by hand, and missed a "0"...
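To avoid hand-typing errors like the missing "0" above, the mean±std summary for a metric over repeated runs can be generated programmatically. A minimal sketch (the run values below are hypothetical, not the actual benchmark numbers):

```python
import statistics

def summarize_runs(values):
    """Summarize a metric over repeated runs as 'mean±std' (sample std)."""
    mean = statistics.mean(values)
    std = statistics.stdev(values)
    return f"{mean:.4f}±{std:.2f}"

# hypothetical information-ratio values from 5 runs
ir_runs = [0.52, 0.61, 0.55, 0.66, 0.58]
print(summarize_runs(ir_runs))  # e.g. "0.5840±0.05"
```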
@wendili-cs please double-check your rank IC. TFT has a high IC and annualized return while its rank IC is low.
Please verify it, though such results are possible.
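For context on the IC vs. rank IC discrepancy: IC is usually the Pearson correlation between predicted scores and realized returns, while rank IC uses the Spearman (rank) correlation. A minimal sketch with toy, hypothetical numbers shows how the two can diverge in general (monotonic but nonlinear relationships inflate rank IC relative to IC, and vice versa):

```python
import pandas as pd

def ic_and_rank_ic(pred, label):
    """Pearson IC and Spearman rank IC between scores and realized returns."""
    s_pred, s_label = pd.Series(pred), pd.Series(label)
    ic = s_pred.corr(s_label, method="pearson")
    rank_ic = s_pred.corr(s_label, method="spearman")
    return ic, rank_ic

# toy cross-section of 5 stocks (hypothetical scores and next-day returns)
pred = [0.8, 0.1, 0.5, -0.3, 0.2]
ret = [0.02, -0.01, 0.01, -0.02, 0.00]
ic, rank_ic = ic_and_rank_ic(pred, ret)
```

Here the orderings agree perfectly, so rank IC is exactly 1.0 while the Pearson IC is slightly below it; a model whose rank IC sits far below its IC deserves a second look.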
@Derek-Wds Are these results generated by repeating the experiments 20 times?
Not for now. The runs for Alpha360 haven't finished yet; I will update the results when they are done.
Further Optimisation For Model
Update benchmark performance
Description
Motivation and Context
How Has This Been Tested?
`pytest qlib/tests/test_all_pipeline.py` under the upper directory of `qlib`.
Screenshots of Test Results (if appropriate):
Types of changes