[Computer-go] Dave Devos vs Fuego

David Fotland fotland at smart-games.com
Fri Nov 26 12:12:38 PST 2010

It's pretty easy to find cases where the AI blunders.  Automatically finding
these is not a bottleneck to improving performance.  The bottleneck is
programmer time to analyze and fix the problems, and test resources to
verify that the fix did not make the program weaker in other situations.  I
have dozens of similar test cases for Many Faces awaiting my attention.




From: computer-go-bounces at dvandva.org
[mailto:computer-go-bounces at dvandva.org] On Behalf Of terry mcintyre
Sent: Friday, November 26, 2010 9:17 AM
To: computer-go at dvandva.org
Subject: Re: [Computer-go] Dave Devos vs Fuego


Martin, how close were the 2nd and 3rd choices?


Looking at the bigger picture: is it yet possible for programs to do
automatic post-game reviews of losing games - possibly spending a few hours
or days on a single game - and locate weak spots in their own behavior? 
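One way such an automated review could work, in a minimal sketch: re-evaluate each position of the finished game with the engine and flag moves where the mover's estimated winning percentage drops sharply. Everything here is illustrative, not an actual Fuego or Many Faces API; the win rates are assumed to come from some engine call that returns a value in [0, 1] from the side-to-move's perspective.

```python
# Hypothetical sketch: locate candidate blunders in a finished game by
# scanning per-position win-rate estimates for sharp drops.
# win_rates[i] is the engine's estimate for the player to move at ply i,
# so the same player's estimates sit two plies apart.

def find_blunders(win_rates, threshold=0.15):
    """Return (ply, drop) pairs where one player's win-rate estimate
    fell by more than `threshold` between consecutive moves of theirs."""
    blunders = []
    for i in range(len(win_rates) - 2):
        drop = win_rates[i] - win_rates[i + 2]  # same player, two plies apart
        if drop > threshold:
            blunders.append((i, drop))
    return blunders

# Example: the player to move at ply 2 is well ahead (0.82), but two
# plies later their estimate has collapsed to 0.40.
estimates = [0.55, 0.50, 0.82, 0.45, 0.40, 0.42]
print(find_blunders(estimates))
```

Spending hours or days per game would then amount to running a deep search at each flagged position to confirm the drop and extract the refutation.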


Another question: at what point did Fuego realize that it had blundered?
(When did the "winning percentage" show a sharp decline?) Did it "know" that
it was ahead prior to that weak move which arguably cost it the game?


How closely do such estimates correlate with the estimates of strong players?
When they differ significantly, is it possible to say which is more correct,
the program or the players?


Terry McIntyre <terrymcintyre at yahoo.com>

Unix/Linux Systems Administration
Taking time to do it right saves having to do it twice.




From: "dave.devos at planet.nl" <dave.devos at planet.nl>
To: computer-go at dvandva.org
Sent: Fri, November 26, 2010 11:41:16 AM
Subject: Re: [Computer-go] Dave Devos vs Fuego

Ok, so it's probably specific to Fuego like David Fotland and Magnus said.

Glad I could help ;)





From: computer-go-bounces at dvandva.org on behalf of Martin Mueller
Sent: Fri 26-11-2010 16:49
To: computer-go at dvandva.org
Subject: Re: [Computer-go] Dave Devos vs Fuego

I ran current Fuego on my laptop for 600K simulations. Like the other
programs, it prefers the simple C7, with a 0.82 evaluation. However, the bad
moves M19 and N19 are second and third in number of simulations. So there
seems to be some systematic problem with the playouts and/or the tree search
here, which makes this white group die often after those moves. Strange,
since N19 is not even a threat.
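For readers less familiar with MCTS: ranking by "number of simulations" matters because most MCTS programs, Fuego included, choose the move to play by visit count at the root (the "robust child" rule), so a move's rank reflects where search effort went. A minimal sketch of that rule, with an invented RootChild structure and illustrative numbers loosely matching the ones above (not Fuego's actual data structures):

```python
# Sketch of the standard MCTS final-move rule: sort root children by
# simulation (visit) count, breaking ties by mean win rate.
# RootChild and the counts below are illustrative only.

from dataclasses import dataclass

@dataclass
class RootChild:
    move: str
    visits: int   # playouts routed through this child
    wins: float   # accumulated wins for the side to move

    @property
    def value(self):
        # Mean win rate over this child's playouts.
        return self.wins / self.visits if self.visits else 0.0

def rank_moves(children):
    """Sort by visit count (the 'robust child' rule), then by value."""
    return sorted(children, key=lambda c: (c.visits, c.value), reverse=True)

children = [
    RootChild("C7", 220_000, 180_400),   # ~0.82 win rate, most visits
    RootChild("M19", 90_000, 40_500),    # second in simulations despite
    RootChild("N19", 70_000, 30_100),    # its poor win rate
]
best = rank_moves(children)[0]
print(best.move, round(best.value, 2))
```

Under this rule, bad moves accumulating the second- and third-most simulations means the search itself was drawn to them, which is why it points at the playouts or tree search rather than the final selection step.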

Thank you for the test case.

Computer-go mailing list
Computer-go at dvandva.org

