[Computer-go] Creating the playout NN
skaitschick at gmail.com
Sun Jun 12 01:56:40 PDT 2016
I don't know how the added training time compares to direct training of the
shallow NN. It's probably not so important, because both should be much
faster than training the deep NN.
Accuracy should be slightly improved.
Together, that might not justify the effort. But I think the fact that you
can create the mimicking NN, after the deep NN has been refined with self
play, is important.
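To make the mimic-training idea concrete, here is a toy numpy sketch. Everything in it is illustrative, not AlphaGo's actual setup: the "teacher" is a small random function standing in for the deep policy net, the "student" is a single linear layer standing in for the shallow playout net, and the sizes are invented. Following the approach in the arXiv paper cited below, the student is regressed onto the teacher's logits rather than trained on the original hard labels.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": stands in for the deep policy net. Here just a fixed
# random two-layer function producing logits over 4 toy "moves".
W1_t = rng.normal(size=(8, 16))
W2_t = rng.normal(size=(16, 4))

def teacher_logits(x):
    return np.tanh(x @ W1_t) @ W2_t

# "Student": a single linear layer, the shallow mimic.
W_s = np.zeros((8, 4))

X = rng.normal(size=(256, 8))   # unlabeled positions (toy features)
T = teacher_logits(X)           # soft targets: the teacher's logits

def mse(W):
    return np.mean((X @ W - T) ** 2)

# Plain gradient descent on the logit-matching loss.
losses = [mse(W_s)]
lr = 0.01
for _ in range(200):
    grad = 2 * X.T @ (X @ W_s - T) / len(X)   # d(MSE)/dW
    W_s -= lr * grad
    losses.append(mse(W_s))
```

The key point the sketch shows: the student never sees the original training labels, only the teacher's outputs, so this step can be rerun at any time after the teacher has been improved by self-play.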
On Sun, Jun 12, 2016 at 9:51 AM, Petri Pitkanen <petri.t.pitkanen at gmail.com> wrote:
> Would the expected improvement be reduced training time or improved
> accuracy?
> 2016-06-11 23:06 GMT+03:00 Stefan Kaitschick <stefan.kaitschick at hamburg.de>:
>> If I understood it right, the playout NN in AlphaGo was created by using
>> the same training set as the one used for the large NN that is used in the
>> tree. There would be an alternative though. I don't know if this is the
>> best source, but here is one example: https://arxiv.org/pdf/1312.6184.pdf
>> The idea is to teach a shallow NN to mimic the outputs of a deeper net.
>> For one thing, this seems to give better results than direct training on
>> the same set. But also, more importantly, this could be done after the
>> large NN has been improved with selfplay.
>> And after that, the selfplay could be restarted with the new playout NN.
>> So it seems to me, there is real room for improvement here.
>> Computer-go mailing list
>> Computer-go at computer-go.org