[Computer-go] Dave Devos vs Fuego
terrymcintyre at yahoo.com
Fri Nov 26 09:17:29 PST 2010
Martin, how close were the 2nd and 3rd choices?
Looking at the bigger picture: is it yet possible for programs to do automatic
post-game reviews of losing games - possibly spending a few hours or days on a
single game - and locate weak spots in their own behavior?
Another question: at what point did Fuego realize that it had blundered? ( when
did the "winning percentage" show a sharp decline? ) Did it "know" that it was
ahead prior to that weak move which arguably cost it the game?
How closely do such estimates correlate with the estimates of strong players? When
they differ significantly, is it possible to say whether the program or the
players are more correct?
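The kind of self-review Terry asks about could start from the engine's own winrate record: scan the per-move estimates and flag the sharpest single-move decline as the likely blunder. A minimal sketch, where the function name and the example estimates are invented for illustration and not taken from the actual game:

```python
def sharpest_decline(winrates):
    """Return (move_index, drop) for the largest one-move decline.

    winrates[i] is the engine's winning-percentage estimate
    (0.0 to 1.0, from the moving side's point of view) after move i.
    """
    worst_i, worst_drop = -1, 0.0
    for i in range(1, len(winrates)):
        drop = winrates[i - 1] - winrates[i]
        if drop > worst_drop:
            worst_i, worst_drop = i, drop
    return worst_i, worst_drop

# Invented illustration data: the engine thinks it is ahead (0.82),
# then its estimate collapses after move 4.
estimates = [0.55, 0.60, 0.82, 0.80, 0.45, 0.40]
move, drop = sharpest_decline(estimates)
```

A real review pass would then re-search the flagged position with a much larger time budget to confirm the move was actually the losing one.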
Terry McIntyre <terrymcintyre at yahoo.com>
Unix/Linux Systems Administration
Taking time to do it right saves having to do it twice.
From: "dave.devos at planet.nl" <dave.devos at planet.nl>
To: computer-go at dvandva.org
Sent: Fri, November 26, 2010 11:41:16 AM
Subject: Re: [Computer-go] Dave Devos vs Fuego
Ok, so it's probably specific to Fuego, as David Fotland and Magnus said.
Glad I could help ;)
From: computer-go-bounces at dvandva.org on behalf of Martin Mueller
Sent: Fri 26-11-2010 16:49
To: computer-go at dvandva.org
Subject: Re: [Computer-go] Dave Devos vs Fuego
I ran current Fuego on my laptop for 600K simulations. Like the other
programs, it simply likes C7, with a 0.82 evaluation. However, the bad moves M19
and N19 are second and third in number of simulations. So there seems to be some
systematic problem with the playouts and/or the tree search here, which makes
this white group die often after those moves. Strange since N19 is not even a
Thank you for the test case.
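The statistic Martin reports, moves ranked by number of simulations at the root, is how a typical MCTS engine both evaluates and selects its move: the played move is usually the most-visited child, which need not be the child with the best mean value. A minimal sketch with invented numbers, not Fuego's actual output:

```python
# Hypothetical root statistics after a search: visit counts and mean
# values per candidate move. The figures are illustration only.
children = {
    "C7":  {"visits": 300_000, "value": 0.82},
    "M19": {"visits": 150_000, "value": 0.55},
    "N19": {"visits": 100_000, "value": 0.53},
}

# Most engines play the most-visited child (robust to value noise)...
by_visits = max(children, key=lambda m: children[m]["visits"])

# ...rather than the child with the highest mean value.
by_value = max(children, key=lambda m: children[m]["value"])

ranking = sorted(children, key=lambda m: -children[m]["visits"])
```

In Martin's run the two orderings agree on C7, but the fact that M19 and N19 rank second and third by visits is what points at a systematic playout or tree-search problem: the search is spending a large share of its budget on moves a strong player would dismiss.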