[Computer-go] Deep Blue the end, AlphaGo the beginning?

uurtamo . uurtamo at gmail.com
Fri Aug 18 14:07:20 PDT 2017


I only ask, not to be snippy or impolite, but because I have just enough
knowledge to be dangerous while having no freaking idea what I'm
talking about wrt chess research, and by way of introduction, let me say
that I've seen some people talk about (and a coworker at my former
university worked with) strong chess programs and I've done some analysis
with them. I think of them generally as black boxes whose strength gets
more and more complicated to measure since they can only essentially play
themselves anymore in an interesting way. Eventually I imagine it will take
more analysis on our part to understand their games than they are going to
give us. Which I'm fine with.


They run on laptops. A program that could crush a grandmaster will run on
my laptop. That's an assertion I can't prove, but I'm asking you to verify
it or suggest otherwise.

Now the situation with go is different.

Perhaps it's that the underlying problem is harder. But "those old methods"
wouldn't work on this problem. I only mean that in the sense that the exact
code for chess, working with the rules of go, adapted using some
first-pass half-assed idea of what that means, would fail horribly.
Probably both because 64 << 361 and because queen >> 1 stone and for god
only knows how many other reasons.
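To put some rough numbers on that gap, here's a back-of-envelope sketch. The branching factors (roughly 35 legal moves per chess position, roughly 250 per go position) are commonly cited approximations, not measured values, so take the exact figures with a grain of salt; the point is only the order-of-magnitude difference a fixed-depth search faces.

```python
# Back-of-envelope comparison of chess vs. go search spaces.
# Branching factors are rough, commonly cited averages, not exact values.

chess_squares = 64      # 8x8 board
go_points = 19 * 19     # 361 intersections on a standard board

chess_branching = 35    # approx. legal moves per chess position
go_branching = 250      # approx. legal moves per go position

depth = 10              # look ahead 10 plies in both games
chess_tree = chess_branching ** depth
go_tree = go_branching ** depth

print(f"board size:  {chess_squares} vs {go_points}")
print(f"10-ply tree: {chess_tree:.2e} vs {go_tree:.2e} nodes")
print(f"go tree is ~{go_tree / chess_tree:.1e}x larger")
```

Even at a modest 10 plies the go tree comes out hundreds of millions of times larger, before you even get to the harder problem that go has no cheap material-count evaluation the way chess has queen >> pawn.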

So let's first get out of the way that this was probably a much harder
problem (the go problem).

I agree that the sharp definitions of "machine learning", "statistics",
"AI", "blah blah blah" don't really matter toward the idea of "computer
game players", etc.

But if we do agree that the problem itself is fundamentally harder (which
I believe it is), and we don't want to ascribe its solution simply to
hardware (which people tried to do with Deep Blue), then we should
acknowledge that it required more innovation.

I do agree, and hope that you do, that this innovation is all part of a
continuum of innovation that is super exciting to understand.




On Fri, Aug 18, 2017 at 1:31 PM, Gian-Carlo Pascutto <gcp at sjeng.org> wrote:

> On 18-08-17 16:56, Petr Baudis wrote:
> >> Uh, what was the argument again?
> >
> >   Well, unrelated to what you wrote :-) - that Deep Blue implemented
> > existing methods in a cool application, while AlphaGo introduced
> > some very new methods (perhaps not entirely fundamentally, but still
> > definitely a ground-breaking work).
> I just fundamentally disagree with this characterization, which I think
> is grossly unfair to the Chiptest/Deep Thought/Deep Blue lineage.
> Remember there were 12 years in-between those programs.
> They did not just...re-implement the same "existing methods" over and
> over again all that time. Implementation details and exact workings are
> very important [1]. I imagine the main reason this false distinction
> (i.e. the "artificial difference" from my original post) is being made
> is, IMHO, that you're all aware of the fine nuances of how AlphaGo DCNN
> usage (for example) differs compared to previous efforts, but you're not
> aware of the same nuances in Chiptest and successors etc.
> [1] As is speed, another dirty word in AI circles that is nevertheless
> damn important for practical performance.
> --
> _______________________________________________
> Computer-go mailing list
> Computer-go at computer-go.org
> http://computer-go.org/mailman/listinfo/computer-go
