[Computer-go] mini-max with Policy and Value network

Álvaro Begué alvaro.begue at gmail.com
Tue May 23 08:36:01 PDT 2017


On Tue, May 23, 2017 at 4:51 AM, Hideki Kato <hideki_katoh at ybb.ne.jp> wrote:

> (3) CNNs cannot learn the exclusive-or function due to the ReLU
> activation function, used instead of the traditional sigmoid
> (hyperbolic tangent).  CNNs are good at approximating continuous
> (analog) functions but not Boolean (digital) ones.
>

Oh, not this nonsense with the XOR function again.

You can see a neural network with a ReLU activation function learning XOR
right here:
http://playground.tensorflow.org/#activation=relu&batchSize=10&dataset=xor&regDataset=reg-plane&learningRate=0.01&regularizationRate=0&noise=0&networkShape=4,4&seed=0.96791&showTestData=false&discretize=false&percTrainData=50&x=true&y=true&xTimesY=false&xSquared=false&ySquared=false&cosX=false&sinX=false&cosY=false&sinY=false&collectStats=false&problem=classification&initZero=false&hideText=false
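
For a concrete sketch (hand-picked weights, not taken from the Playground
link above), a two-hidden-unit ReLU network already represents XOR exactly:

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Hand-picked weights: h1 = ReLU(x + y), h2 = ReLU(x + y - 1),
# output = h1 - 2*h2, which is exactly XOR on {0,1} inputs.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])   # input -> hidden weights
b1 = np.array([0.0, -1.0])    # hidden biases
w2 = np.array([1.0, -2.0])    # hidden -> output weights

def xor_net(x, y):
    h = relu(np.array([x, y]) @ W1 + b1)
    return h @ w2

for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, y, xor_net(x, y))   # prints 0, 1, 1, 0

Gradient descent finds weights like these on its own, as the Playground
demo shows; the point is simply that ReLU units have no trouble
expressing the function.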

Enjoy,
Álvaro.