[Computer-go] 7.0 Komi and weird deep search result
jloup at gailly.net
Thu Apr 7 16:21:12 PDT 2011
> But recently my feeling is that current MCTS almost reaches its
> limit on 19x19. We will need another breakthrough/good ideas to
> overcome KGS 5d or 6d.
Give me the source code of Zen and Erica (which are amazingly good with
very little hardware) and let me use 64 machines of 20 cores each, and
I bet the result will be 5d today (not in 10 years). That is only a
two-stone improvement, and massive hardware can do this (I've proven it
with pachi). I agree that reaching pro level will be more difficult,
and may require at least 10 years of hardware plus software improvements.
We have gained at least one stone per year for the past few years and
I don't see any reason for this to stop completely. The best programs
are less than 9 stones away from the pros on 19x19. Before MCTS we
were completely blind. Now we have good reasons to be optimistic.
> Imagine that you can make your program strong enough in a month's time
> to win 51% of the games against the previous version. If you can do
> that, you will have added something like 5-10 ELO to the strength of
> your program. 5-10 ELO is a small amount of improvement and it is
> so difficult to measure that it requires thousands of games to
> verify. So we are not talking about a herculean task. And yet, if
> you do that for a year you will have added 1 dan of strength to your
> go program! 1 Dan in 1 year is a lot.
> But I don't think the engineers in Go programming think like that yet.
Actually this is exactly what I do with pachi. I always run 5000
19x19 games for each test, and I can reliably detect 10 elo
improvements. More importantly I can also reliably detect 10 elo
regressions, which are about 10 times more frequent than improvements.
Pachi improved by several stones since I started doing this, testing
ideas generally suggested by Petr.
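For context, here is a sketch of what a 10 elo edge actually means per game under the standard logistic Elo model (the monthly gain and the ~100-elo-per-dan conversion below are my rough assumptions for illustration, not figures from this thread):

```python
def elo_to_winrate(delta_elo):
    """Expected score of the stronger side under the logistic Elo model."""
    return 1.0 / (1.0 + 10.0 ** (-delta_elo / 400.0))

# A 10 elo edge is only a ~51.4% win rate per game, which is why
# thousands of games are needed to tell it apart from 50%.
p = elo_to_winrate(10)  # ~0.514

# Hypothetical compounding: 8 elo per month, every month, for a year.
# If one dan is very roughly 100 elo, that is about one dan per year.
yearly_gain = 12 * 8  # 96 elo
```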
Most of the improvements were below 20 elo. I also tried running
10 000 games per test and found that the extra accuracy was not worth
doubling the test time. And I tried running fewer than 5000 games
per test and found that the accuracy becomes so poor that it is
impossible to detect 10 elo differences reliably.
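The 5000-game figure lines up with a simple back-of-the-envelope power calculation (my sketch, not from the thread): treat each game as a Bernoulli trial and ask how many standard errors separate the observed win rate from 50%.

```python
import math

def detection_z(delta_elo, n_games):
    """z-score separating a delta_elo edge from a 50% win rate
    after n_games, using the normal approximation to the binomial."""
    p = 1.0 / (1.0 + 10.0 ** (-delta_elo / 400.0))
    se = math.sqrt(p * (1.0 - p) / n_games)  # std. error of the win rate
    return (p - 0.5) / se

z5000 = detection_z(10, 5000)    # ~2.0 sigma: borderline but usable
z10000 = detection_z(10, 10000)  # ~2.9 sigma: better, at double the cost
z2500 = detection_z(10, 2500)    # ~1.4 sigma: too noisy to trust
```

At 5000 games a genuine 10 elo gain sits right around the conventional two-sigma threshold, which matches the observation that fewer games make 10 elo differences undetectable while 10 000 games buy only modest extra confidence.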
In short I completely agree with Don and I already apply to go what
has been proven successful in chess.