[Computer-go] Zero is weaker than Master!?
gcp at sjeng.org
Thu Oct 26 14:45:22 PDT 2017
Figure 6 shows the same graph as Figure 3, but for 40 blocks. You can compare the two.
On Thu, Oct 26, 2017, 23:35 Xavier Combelle <xavier.combelle at gmail.com>
> Unless I am mistaken, figure 3 shows the plot of supervised learning
> versus reinforcement learning, not 20 block vs 40 block.
> Looking for mentions of the 20 blocks, I searched for "20" in the whole
> paper and did not find any mention other than the kifu thing.
> On 26/10/2017 at 15:10, Gian-Carlo Pascutto wrote:
> > On 26-10-17 10:55, Xavier Combelle wrote:
> >> It is just wild guesses based on reasonable arguments but without
> >> evidence.
> > David Silver said they used 40 layers for AlphaGo Master. That's more
> > evidence than there is for the opposite argument that you are trying to
> > make. The paper certainly doesn't talk about a "small" and a "big"
> > network.
> > You seem to be arguing from a bunch of misreadings and
> > misunderstandings. For example, Figure 3 in the paper shows the Elo plot
> > for the 20 block/40 layer version, and it compares to AlphaGo Lee, not
> > AlphaGo Master. The AlphaGo Master line would be above the flattening
> > part of the 20 block/40 layer AlphaGo Zero. I guess you missed this when
> > you said that they "only mention it to compare on kifu prediction"?
> Computer-go mailing list
> Computer-go at computer-go.org