[Computer-go] AlphaGo Zero

uurtamo . uurtamo at gmail.com
Fri Oct 20 12:12:58 PDT 2017


This sounds like a nice idea, but it is a misguided project.

Keep in mind the number of weights involved, and the fact that "one factor
at a time" testing will tell you nearly nothing about the overall dynamics
of a system with tens of thousands of dimensions. So you're going to need
something like really careful experimental design across many dimensions
(node weights) simultaneously, and several million experiments -- each of
which will require hundreds if not tens of thousands of games to measure the
effect of the change. Worse, there are probably tens of millions of neural
nets of this size that perform equally well (isomorphisms plus minor
weight changes), so most changes will produce either no measurable effect
or a completely useless game model.
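To make that concrete, here is a toy sketch -- nothing like AlphaGo's actual
architecture, just a small random two-layer network I'm inventing for
illustration -- of what "modify ONE weight" does to a net's output:

```python
# Toy illustration: perturb ONE weight in a small random network and
# measure how much the output distribution moves. (Architecture and
# numbers are made up for the example; this is not AlphaGo's net.)
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (64, 64))   # hidden layer weights
W2 = rng.normal(0, 0.1, (64, 8))    # output layer weights

def policy(x, W1, W2):
    """A tiny tanh network with a softmax output."""
    h = np.tanh(x @ W1)
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = rng.normal(0, 1, 64)            # one arbitrary input position
base = policy(x, W1, W2)

# "One factor at a time": nudge a single weight and compare outputs.
W1_mod = W1.copy()
W1_mod[3, 5] += 0.01
changed = policy(x, W1_mod, W2)

print(np.abs(changed - base).max())  # typically a tiny number
```

On a single input the shift is usually far below anything you could detect
from game outcomes, which is why each probe needs so many games.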

"modeling through human knowledge" neural nets doesn't sound like a
sensible goal -- it sounds more like a need to understand a topic in a
language not equipped for it without a simultaneous desire to understand a
topic under its own fundamental requirements in its own language.

Or you could build a machine-learning model to try to model those
changes.... except that you'd end up where you started, roughly. Another
black box and another frustrated human.

Just accept that something awesome happened, and that studying the things
that make it work well is more interesting than translating coefficients
into a bad understanding for people.

I'm sorry that this NN can't teach anyone how to be a better player through
anything other than kicking their ass, but it wasn't built for that.

s.


On Fri, Oct 20, 2017 at 8:24 AM, Robert Jasiek <jasiek at snafu.de> wrote:

> On 20.10.2017 15:07, Adrian.B.Robert at gmail.com wrote:
>
>> 1) Where is the semantic translation of the neural net to human theory
>>> knowledge?
>>>
>> As far as (1), if we could do it, it would mean we could relate the
>> structures embedded in the net's weight patterns to some other domain --
>>
>
> The other domain can be "human go theory". It has various forms, from
> informal via textbook to mathematically proven. Sure, it is also incomplete
> but it can cope with additions.
>
> The neural net's weights and whatnot are given. This raw data can be
> deciphered in principle. By humans, algorithms or a combination.
>
> You do not know where to start? Why, that is easy: test! Modify ONE weight
> and study its effect on ONE aspect of human go theory, such as the
> occurrence (frequency) of independent life. No effect? Increase the
> modification, test a different weight, test a subset of adjacent weights,
> etc. It has been possible to study the semantics of parts of DNA, e.g., from
> differences related to illnesses. Modifying the weights is like
> creating causes for illnesses (or for improved health).
>
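For scale, the statistics behind "modify one weight, measure one frequency"
look roughly like this (all of the frequencies below are illustrative
assumptions, not measurements):

```python
# Rough back-of-envelope: how many games does it take to detect a small
# shift in the rate of some event (say, independent life occurring)?
# p0 and delta are assumed for illustration only.
import math

p0 = 0.30     # assumed baseline frequency of the event
delta = 0.01  # assumed shift produced by the weight change
z = 1.96      # ~95% confidence

# Normal-approximation sample size for detecting p0 -> p0 + delta:
n = (z ** 2) * p0 * (1 - p0) / delta ** 2
print(math.ceil(n))  # -> 8068 games needed per condition
```

That is thousands of games per probe to resolve even a one-percentage-point
shift -- and that is the cost of testing a single weight once.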
> There is no "we cannot do it", but maybe there is too much required effort
> for it to be financially worthwhile for the "too specialised" case of Go?
> As I say, a mathematical proof of a complete solution of Go will occur
> before AI playing perfectly;)
>
> So far neural
>> nets have been trained and applied within single domains, and any
>> "generalization" means within that domain.
>>
>
> Yes.
>
> --
> robert jasiek
>
> _______________________________________________
> Computer-go mailing list
> Computer-go at computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
>