# Ranger-Deep-Learning-Optimizer
Ranger - a synergistic optimizer combining RAdam (Rectified Adam) and LookAhead in one codebase.
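The LookAhead half of that combination is simple to sketch: an inner "fast" optimizer runs normally, and every k steps the weights are pulled toward a slowly moving copy. Below is a minimal illustrative sketch only; plain SGD stands in for RAdam, and the function name and defaults are made up, not Ranger's actual API.

```python
import numpy as np

def lookahead_sgd(grad_fn, x0, lr=0.1, alpha=0.5, k=5, steps=100):
    """Illustrative LookAhead sketch (hypothetical helper, not Ranger's API).

    A 'fast' iterate takes ordinary inner-optimizer steps (SGD here);
    every k steps the 'slow' copy is interpolated toward it and the
    fast iterate is reset to the slow weights.
    """
    fast = np.array(x0, dtype=float)   # fast weights, updated every step
    slow = fast.copy()                 # slow weights, updated every k steps
    for t in range(1, steps + 1):
        fast -= lr * grad_fn(fast)     # inner optimizer step
        if t % k == 0:                 # LookAhead synchronization
            slow += alpha * (fast - slow)
            fast = slow.copy()
    return slow

# Toy run: minimize f(x) = ||x||^2, whose gradient is 2x
final = lookahead_sgd(lambda x: 2.0 * x, [3.0, -2.0])
```

In Ranger itself the inner optimizer is RAdam rather than SGD, and (per the refactor note below) the slow weights are handled in a single pass inside the optimizer state.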
Latest version 9.3.19 - full refactoring for slow weights and one pass handling (vs two before). Refactor should eliminate any random save/load issues regarding memory.
----------------------------------------------
**Beta Version - Ranger913A.py:**
For anyone who wants to try this out early, this version changes from RAdam to a calibrated anisotropic adaptive learning rate, per this paper:
https://arxiv.org/abs/1908.00700v2
"Empirical studies support our observation of the anisotropic A-LR and show that the proposed methods outperform existing AGMs and generalize even better than S-Momentum in multiple deep learning tasks."
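For context on what "calibrated" means here: one calibration the linked paper discusses is replacing Adam's hard `sqrt(v) + eps` denominator with a smooth softplus lower bound. The sketch below is an assumption-laden illustration of that idea, not the Ranger913A.py code; the function name, the `beta` smoothing parameter, and all defaults are made up for the example.

```python
import numpy as np

def calibrated_adam_step(param, grad, m, v, t,
                         lr=0.05, b1=0.9, b2=0.999, beta=50.0):
    """One Adam-style step with a softplus-calibrated denominator
    (illustrative sketch; names and defaults are assumptions)."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2      # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias corrections
    v_hat = v / (1 - b2 ** t)
    # softplus_beta(x) = log(1 + exp(beta * x)) / beta, computed stably
    # with logaddexp: it smoothly lower-bounds the denominator instead
    # of adding a hard eps to sqrt(v_hat)
    denom = np.logaddexp(0.0, beta * np.sqrt(v_hat)) / beta
    return param - lr * m_hat / denom, m, v

# Toy run: minimize f(x) = x^2 starting from x = 3.0
x, m, v = 3.0, 0.0, 0.0
for t in range(1, 501):
    x, m, v = calibrated_adam_step(x, 2.0 * x, m, v, t)
```

For large `sqrt(v_hat)` the softplus is nearly the identity, so the step matches plain Adam; the calibration only changes behavior where the second-moment estimate is small.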
Initial testing looks very good for training stabilization. Any feedback in comparison with current Ranger (9.3.19) is welcome!