[Computer-go] Converging to 57%

David Fotland fotland at smart-games.com
Tue Aug 23 22:24:38 PDT 2016

I find that I get a nice bounce (about 2%) in accuracy by reducing the learning rate by a factor of ten once accuracy on the test set stops improving. I also found it pretty easy to reach 45% with small networks, as long as the bottom layer included some 5x5 filters.
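Fotland doesn't give code for the plateau rule; a minimal sketch of one way to implement it, where the patience window and the factor of ten are the only knobs (the patience value of 3 is my own assumption, not from the post):

```python
def reduce_on_plateau(lr, history, patience=3, factor=0.1):
    """Drop the learning rate by `factor` once test accuracy stops improving.

    history: test-set accuracies, one per evaluation.
    Reduces lr when the best accuracy in the last `patience` evaluations
    is no better than the best accuracy seen before that window.
    """
    if len(history) > patience and max(history[-patience:]) <= max(history[:-patience]):
        return lr * factor
    return lr
```

In practice you would call this after each evaluation pass and feed the returned rate back into the optimizer; frameworks ship a ready-made version of the same idea (e.g. a reduce-on-plateau callback).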




From: Computer-go [mailto:computer-go-bounces at computer-go.org] On Behalf Of Brian Lee
Sent: Tuesday, August 23, 2016 7:00 AM
To: computer-go at computer-go.org
Subject: Re: [Computer-go] Converging to 57%


I've been working on my own AlphaGo replication (code on GitHub: https://github.com/brilee/MuGo), and I've found it reasonably easy to hit a 45% prediction rate with basic features (stone locations, liberty counts, and turns since last move) and a relatively small network (6 intermediate layers, 32 filters each), using Adam with a 10e-4 learning rate. This took about two hours on a GTX 960.
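For concreteness, here is one plausible way to encode the three basic features Lee names as input planes over a 19x19 board. The plane counts, bucketing (capping liberties at 4 and move age at 8), and one-hot layout are my own assumptions for illustration, not taken from MuGo's source:

```python
import numpy as np

SIZE = 19  # board dimension

def features(board, liberties, ages, max_libs=4, max_age=8):
    """Stack one-hot feature planes for a single position.

    board:     (19, 19) array, +1 black / -1 white / 0 empty
    liberties: (19, 19) array, liberty count of the group at each stone
    ages:      (19, 19) array, moves since each point was played (large if never)
    """
    planes = []
    planes.append((board == 1).astype(np.float32))   # black stones
    planes.append((board == -1).astype(np.float32))  # white stones
    planes.append((board == 0).astype(np.float32))   # empty points
    for n in range(1, max_libs + 1):                 # liberty-count planes
        if n < max_libs:
            planes.append(((liberties == n) & (board != 0)).astype(np.float32))
        else:  # final plane catches "max_libs or more"
            planes.append(((liberties >= n) & (board != 0)).astype(np.float32))
    for n in range(1, max_age + 1):                  # turns-since-last-move planes
        planes.append((ages == n).astype(np.float32))
    return np.stack(planes)  # shape: (3 + max_libs + max_age, 19, 19)
```

A stack like this (here 15 planes) would then be fed to the first convolutional layer; the network described above would reduce it to a per-point move probability.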


As others have mentioned, accuracy shoots up sharply at the start and then improves extremely slowly but steadily over time. So I'll experiment with fleshing out more features, increasing the size of the network, and training for longer.





