[Computer-go] mini-max with Policy and Value network

Hideki Kato hideki_katoh at ybb.ne.jp
Tue May 23 16:39:04 PDT 2017

Alvaro Begue: <CAF8dVMVMwi65m9jMTsvOa=qZorTQz-DEdh5494UWzeLd9sUL+Q at mail.gmail.com>:
>On Tue, May 23, 2017 at 4:51 AM, Hideki Kato <hideki_katoh at ybb.ne.jp> wrote:
>> (3) CNN cannot learn the exclusive-or function due to the ReLU
>> activation function used instead of the traditional sigmoid
>> (hyperbolic tangent).  CNN is good at approximating continuous
>> (analog) functions but not Boolean (digital) ones.
>Oh, not this nonsense with the XOR function again.
>You can see a neural network with ReLU activation function learning XOR
>right here: http://playground.tensorflow.org/#activation=relu&
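[A concrete illustration of the rebuttal, not from either poster: a two-hidden-unit ReLU network can represent XOR exactly on Boolean inputs, so the representational claim in (3) does not hold. The weights below are a standard textbook construction, chosen here for illustration.]

```python
def relu(x):
    """Rectified linear unit: max(0, x)."""
    return max(0.0, x)

def xor_net(x1, x2):
    """A 2-2-1 ReLU network computing XOR exactly on {0, 1} inputs."""
    # Hidden layer: h1 = ReLU(x1 + x2), h2 = ReLU(x1 + x2 - 1)
    h1 = relu(x1 + x2)
    h2 = relu(x1 + x2 - 1.0)
    # Output: h1 - 2*h2 yields 0, 1, 1, 0 on the four Boolean inputs
    return h1 - 2.0 * h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xor_net(a, b))
```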

That NN has no "sharp" edges.  With a sigmoid (hyperbolic tangent) 
activation function, changing the weights can change the sharpness 
of the edges of the approximated function.  With ReLU, changing 
the weights only changes the slope.
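[A small numerical sketch of the distinction drawn above, added for illustration: scaling the input weight w of a tanh unit sharpens its transition toward a step function, while scaling the weight of a ReLU unit only rescales its slope, since ReLU(w*x) = w*ReLU(x) for any w > 0.]

```python
import math

def tanh_unit(w, x):
    """Hyperbolic-tangent unit with input weight w."""
    return math.tanh(w * x)

def relu_unit(w, x):
    """ReLU unit with input weight w."""
    return max(0.0, w * x)

x = 0.1
for w in (1.0, 10.0, 100.0):
    # tanh saturates toward +/-1 as w grows (a sharper edge);
    # the ReLU output just scales linearly in w (same shape, steeper slope).
    print(w, tanh_unit(w, x), relu_unit(w, x))
```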

Hideki Kato <mailto:hideki_katoh at ybb.ne.jp>
