[Computer-go] CGOS source on github

David Wu lightvector at gmail.com
Fri Jan 22 06:39:24 PST 2021


@Claude - Oh, sorry, I misread your message, you were also asking about
ladders, not just liberties. In that case, yes! If you outright tell the
neural net as an input whether each ladder works or not (doing a short
tactical search to determine this), or something equivalent to it, then the
net will definitely make use of that information, There are some bad side
effects even to doing this, but it helps the most common case. This is
something the first version of AlphaGo did (before they tried to make it
"zero") and something that many other bots do as well. But Leela Zero and
ELF do not do this, because of attempting to remain "zero", i.e. free as
much as possible from expert human knowledge or specialized feature
crafting.


On Fri, Jan 22, 2021 at 9:26 AM David Wu <lightvector at gmail.com> wrote:

> Hi Claude - no, generally feeding liberty counts to neural networks
> doesn't help as much as one would hope with ladders and sekis and large
> capturing races.
>
> The thing that is hard about ladders has nothing to do with liberties - a
> trained net is perfectly capable of recognizing the atari; that part is
> extremely easy. The hard part is predicting whether the ladder will work without
> playing it out, because whether it works depends extremely sensitively on
> the exact position of stones all the way on the other side of the board. A
> net that fails to predict this well might prematurely reject a working
> ladder (which is very hard for the search to correct), or be highly
> overoptimistic about a nonworking ladder (which takes the search thousands
> of playouts to correct in every single branch of the tree that it happens
> in).
>
> For large sekis and capturing races, liberties usually don't help as much
> as you would think. This is because approach liberties, ko liberties, big
> eye liberties, shared versus unshared liberties, and throw-in
> possibilities all affect the "effective" liberty count significantly. Also,
> very commonly you have bamboo joints, simple diagonal or hanging
> connections, and other shapes where the whole group is not physically
> connected, which also makes the raw liberty count not so useful. The neural net
> still ultimately has to scan over the entire group anyway, computing these
> things.
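The raw per-chain liberty count that would be fed to the net is just a flood fill, which is exactly what makes it cheap and tempting as an input feature. A small sketch of why it misleads (the board encoding and positions are my own illustration): a bamboo joint leaves two physically separate chains, each reporting its own liberty count, even though white cannot cut them apart and they fight as a single group.

```python
# Sketch: raw liberty counting via flood fill. The board is a dict mapping
# (x, y) -> 'b'/'w', empty points absent (my own encoding, for illustration).

def chain_liberties(board, pos, size):
    """Return the set of liberties of the chain containing pos."""
    color = board[pos]
    stack, chain, libs = [pos], {pos}, set()
    while stack:
        x, y = stack.pop()
        for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= n[0] < size and 0 <= n[1] < size:
                if n not in board:
                    libs.add(n)
                elif board[n] == color and n not in chain:
                    chain.add(n)
                    stack.append(n)
    return libs

# A bamboo joint: two separate black two-stone chains that white cannot cut,
# because answering a cut at one of the two shared points connects at the other.
bamboo = {(3, 3): 'b', (3, 4): 'b', (5, 3): 'b', (5, 4): 'b'}
left = chain_liberties(bamboo, (3, 3), 9)
right = chain_liberties(bamboo, (5, 3), 9)
# Each chain reports 6 raw liberties, two of them shared; no per-chain
# number reveals that the pair is effectively one group.
```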
>
> On Fri, Jan 22, 2021 at 8:31 AM Claude Brisson via Computer-go <
> computer-go at computer-go.org> wrote:
>
>> Hi. Maybe it's a newbie question, but since the ladders are part of the
>> well defined topology of the goban (as well as the number of current
>> liberties of each chain of stones), can't feeding those values to the
>> networks (from the very start of the self teaching course) help with large
>> shichos and sekis?
>>
>> Regards,
>>
>>   Claude
>> On 21-01-22 13 h 59, Rémi Coulom wrote:
>>
>> Hi David,
>>
>> You are right that non-determinism and bot blind spots are a source of
>> problems with Elo ratings. I add randomness to the openings, but it is
>> still difficult to avoid repeating some patterns. I have just noticed that
>> the two wins of CrazyStone-81-15po against LZ_286_e6e2_p400 were caused by
>> very similar ladders in the opening:
>> http://www.yss-aya.com/cgos/viewer.cgi?19x19/SGF/2021/01/21/733333.sgf
>> http://www.yss-aya.com/cgos/viewer.cgi?19x19/SGF/2021/01/21/733301.sgf
>> Such a huge blind spot in such a strong engine is likely to cause rating
>> compression.
>>
>> Rémi
>>
>> _______________________________________________
>> Computer-go mailing list
>> Computer-go at computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>> _______________________________________________
>> Computer-go mailing list
>> Computer-go at computer-go.org
>> http://computer-go.org/mailman/listinfo/computer-go
>>
>

