[Computer-go] Zero performance
gcp at sjeng.org
Fri Oct 20 23:32:55 PDT 2017
On 20/10/2017 22:41, Sorin Gherman wrote:
> Training of AlphaGo Zero has been done on thousands of TPUs,
> according to this source:
> Maybe that should explain the difference in orders of magnitude that
> you noticed?
That would make a lot more sense, for sure. It would also explain the
25M USD number from Hassabis. That would be a lot of money to spend on
"only" 64 GPUs, or 4 TPUs (each of which is supposed to be roughly
equivalent to 1 GPU).
There's no explanation of where the number came from, but it seems he
did similar math to that in the original post here.
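To make the orders-of-magnitude argument concrete, here is a rough back-of-the-envelope sketch. All device counts and hourly rates below are assumptions picked for illustration (cloud-style pricing), not figures from this thread or from DeepMind; only the ~40-day training duration comes from the AlphaGo Zero paper.

```python
def cluster_cost(n_devices, usd_per_device_hour, days):
    """Rough cost of running a cluster flat-out for `days` days."""
    return n_devices * usd_per_device_hour * 24 * days

# Hypothetical rates -- illustrative only, not actual DeepMind costs.
small = cluster_cost(64, 1.50, 40)    # "only" 64 GPUs for a 40-day run
large = cluster_cost(2000, 6.50, 40)  # "thousands of TPUs" scenario

print(f"64 GPUs,   40 days: ~${small:,.0f}")   # ~$92,160
print(f"2000 TPUs, 40 days: ~${large:,.0f}")   # ~$12,480,000
```

Under these assumed rates, the small cluster costs on the order of 10^5 USD while the thousands-of-TPUs scenario lands in the 10^7 range, i.e. the same order of magnitude as the quoted 25M USD figure.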