[Computer-go] CGOS source on github
uurtamo at gmail.com
Mon Jan 18 07:26:26 PST 2021
It's a relative ranking versus who you actually get to play against.
Sparsity of actual skill will lead to that kind of clumping.
A rating can only climb meaningfully by playing gnugo or your direct peers,
and that climb is exponentially slow -- you'd need to halve your loss rate
against gnugo (or win all the time over twice as many games) to gain the
same number of points again. So although the rating would eventually
increase, it would flatten out pretty quickly.
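The flattening falls out of the Elo model's logistic curve: each halving of the loss rate against a fixed anchor buys a roughly constant (and eventually shrinking-in-practice) ~120-point increment. A minimal sketch using the standard Elo formula (not CGOS's actual rating code):

```python
import math

def elo_expected(rating_diff):
    """Expected score for the stronger player at a given Elo gap (standard logistic)."""
    return 1.0 / (1.0 + 10.0 ** (-rating_diff / 400.0))

def rating_gap_for_winrate(p):
    """Elo gap implied by a win rate p against a fixed anchor like gnugo."""
    return 400.0 * math.log10(p / (1.0 - p))

# Halving the loss rate against the anchor gains ever fewer points:
for loss_rate in [0.2, 0.1, 0.05, 0.025]:
    gap = rating_gap_for_winrate(1.0 - loss_rate)
    print(f"loss rate {loss_rate:.3f}: implied gap {gap:+.0f} Elo")
```

Each step prints a gap that grows by only ~120-140 points even though the loss rate was cut in half, which is the "exponentially slow" climb described above.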
Good point about MCMC. A more dramatic approach would be to remove gnugo.
On Mon, Jan 18, 2021, 6:41 AM Rémi Coulom <remi.coulom at gmail.com> wrote:
> Thanks to you for taking care of CGOS.
> I have just connected CrazyStone-57-TiV. It is not identical, but should
> be similar to the old CrazyStone-18.04. CrazyStone-18.04 was the last
> version of my program that used tensorflow. CrazyStone-57 is the first
> neural network running with my current code, which does not use tensorflow.
> So it should be stronger than CrazyStone-18.04, and yet I expect it will
> get a much lower rating.
> A possible explanation for the rating drift may be that most of the old MC
> programs have disappeared. They won easily against GNU Go, and were easily
> beaten by the CNN programs. The Elo statistical model is wrong when
> different kinds of programs play against each other. When the CNN programs
> had to get a rating by playing directly against GNU Go, they did not manage
> to climb as high as when they had the MC programs between them and GNU Go.
> I'll try to investigate this hypothesis more with the data.
> Computer-go mailing list
> Computer-go at computer-go.org
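Rémi's point can be illustrated numerically: Elo assumes win probabilities chain transitively through rating gaps, but CNN-vs-MC-vs-GNU-Go matchups need not obey that. A hypothetical sketch with made-up win rates (illustrative only, not CGOS data):

```python
import math

def gap(p):
    """Elo rating gap implied by a head-to-head win probability p."""
    return 400.0 * math.log10(p / (1.0 - p))

def expected(diff):
    """Win probability implied by an Elo gap (logistic model)."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Hypothetical head-to-head win rates (made up for illustration):
p_mc_vs_gnugo = 0.95   # MC programs beat GNU Go easily
p_cnn_vs_mc = 0.95     # CNN programs beat MC programs easily

# With MC programs as intermediaries, Elo chains the two gaps:
chained_gap = gap(p_cnn_vs_mc) + gap(p_mc_vs_gnugo)
print(f"Elo-implied CNN win rate vs GNU Go: {expected(chained_gap):.4f}")

# If the *measured* direct win rate is lower (say a hypothetical 0.99),
# anchoring the CNN programs directly to GNU Go yields a smaller gap:
direct_gap = gap(0.99)
print(f"chained gap {chained_gap:.0f} Elo vs direct gap {direct_gap:.0f} Elo")
```

With these made-up numbers, chaining through the MC programs implies a gap of roughly 1020 Elo over GNU Go, while direct play at a 0.99 win rate anchors the same programs about 200 points lower, which is the kind of drift described in the message above.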