
Description
Hi,
Thank you for this amazing repository! Super useful.
I would like to step through both the server and client code on a single-GPU machine to better understand it. Could you advise which hyperparameters to look at first? My current plan is to reduce the network size, the number of MCTS rollouts, and the minimum queue sizes, but it's quite difficult to "guesstimate" what to change. Something like the sketch below is what I have in mind.
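For concreteness, this is the kind of scaled-down override I mean. Every script and flag name here is a guess on my part rather than something I've confirmed in the repository; only the intent (tiny network, few rollouts, small minimum queue) matters:

```sh
# All names below are hypothetical placeholders -- adjust to whatever the
# real server script exposes. Intent: shrink each knob so a single GPU can
# fill the queue and trigger the first network update within minutes.
./start_server.sh \
    --num_block 2 \
    --dim 16 \
    --mcts_rollout_per_thread 8 \
    --q_min_size 10
# --num_block / --dim:        a tiny residual tower instead of the full net
# --mcts_rollout_per_thread:  a handful of rollouts per move
# --q_min_size:               start training after ~10 games, not hundreds
```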
The underlying issue is that the training scripts assume a large computational cluster is available. With the default setup, the server does not perform a single network update after 12 hours of "training": it just shows "Stats 550/0/0" slowly incrementing, which I assume means it is still filling the minimum queue length before training starts.
Would it be possible to provide alternative startup scripts for a 9x9 or 5x5 (3x3?) board, preferably with the minimum queue sizes and other hyperparameters adjusted so that the first training cycle kicks in within the first couple of minutes, while still exercising the same code pathways as the normal 19x19 version? A rough sketch of what I mean follows.
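By way of illustration only (both the script and flag names below are invented, just to show the shape of what I'm asking for):

```sh
#!/usr/bin/env bash
# Hypothetical start_small_board.sh -- every name here is illustrative,
# not taken from the repository. Same server/client pathway as the 19x19
# setup, just scaled down so training kicks in quickly on one GPU.

BOARD_SIZE=9   # or 5, if the engine supports it

# Server: tiny model and a minimal queue so the first update happens fast.
./start_server.sh --board_size "$BOARD_SIZE" --num_block 2 --dim 16 --q_min_size 10 &

# Client: one self-play worker with very few MCTS rollouts per move.
./start_client.sh --board_size "$BOARD_SIZE" --mcts_rollout_per_thread 8
```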
Thanks again for the great work!