[Computer-go] Converging to 57%
alvaro.begue at gmail.com
Tue Aug 23 06:44:23 PDT 2016
There are situations where carefully crafting the minibatches makes sense.
For instance, if you are training an image classifier it is good to build
the minibatches so the classes are evenly represented. In the case of
predicting the next move in go I don't expect this kind of thing will make
much of a difference.
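For illustration, a class-balanced sampler of the kind used for image classifiers could look like this minimal NumPy sketch (the function name and interface are mine, not from any particular framework):

```python
import numpy as np

def balanced_batch(labels, per_class, rng):
    """Draw a minibatch index set with every class equally represented,
    sampling `per_class` examples (with replacement) from each class."""
    classes = np.unique(labels)
    picks = [rng.choice(np.flatnonzero(labels == c), per_class, replace=True)
             for c in classes]
    return np.concatenate(picks)
```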
I got to around 52% on a subset of GoGoD using ideas from the ResNet paper (
https://arxiv.org/abs/1512.03385). I used 128x20 followed by 64x20 and
finally 32x20, with skip connections every two layers. I started the
training with Adam(1e-4) and later on I lowered it to 1e-5 and eventually
1e-6. The only inputs I use are the signed count of liberties (positive for
black, negative for white), the age of each stone capped at 8, and a block
of ones indicating where the board is.
I'll be happy to share some code if people are interested.
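In the meantime, the input encoding described above can be sketched in plain NumPy; the flood-fill liberty counter below is a minimal reconstruction of the idea, not the actual code:

```python
import numpy as np
from collections import deque

N = 19  # board size

def chain_liberties(board):
    """Per-point liberty counts of each chain, signed by colour
    (+1 black, -1 white, 0 empty in `board`)."""
    libs = np.zeros((N, N), dtype=np.int32)
    seen = np.zeros((N, N), dtype=bool)
    for x in range(N):
        for y in range(N):
            if board[x, y] == 0 or seen[x, y]:
                continue
            colour = board[x, y]
            chain, liberties = [], set()
            frontier = deque([(x, y)])
            seen[x, y] = True
            while frontier:  # flood-fill the chain
                cx, cy = frontier.popleft()
                chain.append((cx, cy))
                for nx, ny in ((cx-1, cy), (cx+1, cy), (cx, cy-1), (cx, cy+1)):
                    if 0 <= nx < N and 0 <= ny < N:
                        if board[nx, ny] == 0:
                            liberties.add((nx, ny))
                        elif board[nx, ny] == colour and not seen[nx, ny]:
                            seen[nx, ny] = True
                            frontier.append((nx, ny))
            for cx, cy in chain:
                libs[cx, cy] = colour * len(liberties)
    return libs

def encode(board, age):
    """Stack the three input planes: signed liberties, stone age capped
    at 8, and a plane of ones marking the board."""
    return np.stack([chain_liberties(board),
                     np.minimum(age, 8),
                     np.ones((N, N), dtype=np.int32)]).astype(np.float32)
```

`encode(board, age)` then returns a 3x19x19 float array ready to feed a network.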
On Tue, Aug 23, 2016 at 7:29 AM, Gian-Carlo Pascutto <gcp at sjeng.org> wrote:
> On 23/08/2016 11:26, Brian Sheppard wrote:
> > The learning rate seems much too high. My experience (which is from
> > backgammon rather than Go, among other caveats) is that you need tiny
> > learning rates. Tiny, as in 1/TrainingSetSize.
> I think that's overkill, as in you effectively end up doing batch
> gradient descent instead of mini-batch/SGD.
> But yes, 0.01 is rather high with momentum. Try 0.001 for methods with
> momentum, and with the default Adam parameters you have to go even lower
> and try 0.0001.
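To see why Adam needs a smaller base rate: with the default betas, the very first Adam update moves each weight by roughly lr regardless of the gradient's scale. A toy single-parameter version of the published update rule (my own sketch, not any framework's implementation):

```python
import numpy as np

def adam_step(g, lr=1e-4, b1=0.9, b2=0.999, eps=1e-8, m=0.0, v=0.0, t=1):
    """One Adam update for a scalar parameter; returns the weight delta
    plus the moment estimates to pass into the next call."""
    m = b1 * m + (1 - b1) * g          # first-moment estimate
    v = b2 * v + (1 - b2) * g * g      # second-moment estimate
    mhat = m / (1 - b1 ** t)           # bias correction
    vhat = v / (1 - b2 ** t)
    return -lr * mhat / (np.sqrt(vhat) + eps), m, v
```

The first step is about lr * sign(g) whether g is 5.0 or 0.01, so lr directly sets the per-weight step size; at 0.01 every weight initially moves by roughly 0.01 per update, which is usually far too much.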
> > Neural networks are dark magic. Be prepared to spend many weeks just
> > trying to figure things out. You can bet that the Google & FB results
> > are just their final runs.
> As always, it's sad that nobody publishes what didn't work, which would
> save us the time of trying it all over again :-)
> > Changing batching to match DarkForest style (making sure that a
> > minibatch contains samples from game phases... for example
> > beginning, middle and end-game).
> This sounds a bit suspicious. The entries in your minibatch should be
> randomly selected from your entire training set, so statistically having
> positions from all phases would be guaranteed. (Or you can shuffle the
> entire training set before each epoch, instead of randomly picking during
> the epoch.) Don't feed the positions in order or from the same game...
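In code, the shuffle-per-epoch scheme is just (NumPy sketch, names mine):

```python
import numpy as np

def minibatches(num_positions, batch_size, rng):
    """Shuffle the whole training set once per epoch, then slice it into
    minibatches; each batch is then a uniform random sample of positions,
    mixing games and game phases automatically."""
    order = rng.permutation(num_positions)
    for start in range(0, num_positions, batch_size):
        yield order[start:start + batch_size]
```

Each yielded index batch mixes games and phases uniformly, with no special-case bucketing by opening/middle/endgame needed.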