[Computer-go] Training an AlphaGo Zero-like algorithm with limited hardware on 7x7 boards

cody2007 cody2007 at protonmail.com
Sat Jan 25 13:52:22 PST 2020


Hi All,

I wanted to share an update to a post I wrote last year about applying the AlphaGo Zero algorithm to small (7x7) boards. I trained for approximately two months on a single desktop PC with two GPU cards.

In that article I reported only mediocre performance from the networks. I've since found a bug in the way I was evaluating them, and the networks I've trained actually appear to match GNU Go's level of play.
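
For anyone curious, evaluation games like these are typically run over GTP. Below is a minimal sketch (not the actual evaluation code from my repo, just an illustration) of driving GNU Go from Python for 7x7 games; it assumes gnugo is installed and on your PATH, and that your own engine supplies the moves you feed into play():

import subprocess

class GnuGoGTP:
    """Thin wrapper around `gnugo --mode gtp` for evaluation games."""
    def __init__(self, boardsize=7):
        self.proc = subprocess.Popen(
            ["gnugo", "--mode", "gtp"],
            stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)
        self.send("boardsize %d" % boardsize)
        self.send("clear_board")

    def send(self, cmd):
        # GTP replies start with "=" (success) or "?" (error) and are
        # terminated by a blank line.
        self.proc.stdin.write(cmd + "\n")
        self.proc.stdin.flush()
        lines = []
        while True:
            line = self.proc.stdout.readline()
            if line.strip() == "":
                break
            lines.append(line.strip())
        return " ".join(lines).lstrip("=? ").strip()

    def play(self, color, vertex):   # send your engine's move, e.g. play("black", "D4")
        return self.send("play %s %s" % (color, vertex))

    def genmove(self, color):        # ask GNU Go for its reply, e.g. genmove("white") -> "C3"
        return self.send("genmove %s" % color)

    def final_score(self):           # e.g. "B+4.5"
        return self.send("final_score")

    def quit(self):
        self.send("quit")
        self.proc.wait()

Playing a batch of games with colors alternated and counting wins from final_score() gives a rough Elo-style comparison against GNU Go.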

Anyway, I'm aware I'm not exactly pushing the bounds of what's been done before, but I thought some might be interested to see how one can still get decent performance (at least in my opinion) on an extremely limited hardware setup -- orders of magnitude less compute than what DeepMind (and Leela Zero) have used.

The post where I talk about the model's performance, training, and setup:
https://medium.com/@cody2007.2/how-i-trained-a-self-supervised-neural-network-to-beat-gnugo-on-small-7x7-boards-6b5b418895b7

A video where I play against the network and show some of its move probabilities during self-play games:
https://www.youtube.com/watch?v=a5vq1OjZrCU

The model weights and TensorFlow code:
https://github.com/cody2007/alpha_go_zero_implementation

-Cody