[Computer-go] Teaching Deep Convolutional Neural Networks to Play Go
davidstarsilver at gmail.com
Tue Mar 17 03:15:27 PDT 2015
Reinforcement learning is different from unsupervised learning. We used
reinforcement learning to train the network to play Atari games. We also
published a more recent paper (www.nature.com/articles/nature14236) that
applied the same network to 49 different Atari games (achieving human-level
performance in around half).
Similar neural network architectures can indeed be applied to Go (that was
one of the motivations for our recent ICLR paper). However, training by
reinforcement learning from self-play is perhaps more challenging than for
Atari: our method (DQN) was applied to single-player Atari games, whereas in
Go there is also an opponent. I could not guarantee that DQN would be stable
in this setting.
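To make the stability concern concrete: DQN's one-step TD target assumes a
stationary environment. A minimal tabular sketch of that target (illustrative
only, not the actual DQN code, which uses a convolutional network and replay
memory):

```python
# Illustrative sketch of the one-step Q-learning target that DQN optimizes
# (tabular form; function names here are invented for this example).

def q_target(reward, next_q_values, gamma=0.99, terminal=False):
    """TD target: r + gamma * max_a' Q(s', a'), or just r at episode end."""
    if terminal:
        return reward
    return reward + gamma * max(next_q_values)

def q_update(q, target, alpha=0.1):
    """Move the current estimate toward the target by learning rate alpha."""
    return q + alpha * (target - q)

# With an opponent (as in Go), the transition to s' depends on the opponent's
# move, so max over next_q_values no longer reflects a fixed environment:
# the target itself shifts as the opposing (self-play) policy changes.
```

The single-agent assumption is baked into the `max` over the agent's own next
actions; against a learning opponent the environment is non-stationary, which
is why convergence is harder to guarantee.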
On 16 March 2015 at 22:21, Oliver Lewis <ojflewis at yahoo.co.uk> wrote:
> Can you say anything about whether you think their approach to
> unsupervised learning could be applied to networks similar to those you
> trained? Any practical or theoretical constraints we should be aware of?
> On Monday, 16 March 2015, Aja Huang <ajahuang at gmail.com> wrote:
>> Hello Oliver,
>> 2015-03-16 11:58 GMT+00:00 Oliver Lewis <ojflewis at yahoo.co.uk>:
>>> It's impressive that the same network learned to play seven games with
>>> just a win/lose signal. It's also interesting that both these teams are in
>>> different parts of Google. I assume they are aware of each other's work,
>>> but maybe Aja can confirm.
>> The authors are my colleagues at Google DeepMind, as they list DeepMind
>> as their affiliation on the paper. Yes, we are aware of each other's work.
> Computer-go mailing list
> Computer-go at computer-go.org