[Computer-go] Aya reaches pro level on GoQuest 9x9 and 13x13

Hiroshi Yamashita yss at bd.mbn.or.jp
Fri Nov 18 20:18:05 PST 2016


> Did you not find a benefit from a larger value network? Too little data
> and too much overfitting? Or more benefit from more frequent evaluation?

I did not find that a larger value network is better.
But I think I need more training data and stronger selfplay games.
I have not seen overfitting so far, and I have not tried more frequent evaluation.

>> Policy + Value vs Policy, 1000 playouts/move, 1000 games. 9x9, komi 7.0
>> 0.634  using game result. 0 or 1
> I presume this is a winrate, but over what base? Policy network?

Policy network (root node only) + value network  vs  Policy network (root node only).
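As a rough sketch of what "policy at the root plus a value network" can look like in an MCTS program (the function names, the PUCT form, and the mixing parameter lambda are my assumptions for illustration, not Aya's actual code):

```python
import math

def puct_score(child_value_sum, child_visits, parent_visits, prior, c_puct=1.0):
    """Selection score for a root child: mean value plus an exploration
    bonus weighted by the policy network's prior probability."""
    q = child_value_sum / child_visits if child_visits > 0 else 0.0
    u = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

def blended_eval(rollout_result, value_net_winrate, lam=0.5):
    """Blend a 0/1 playout result with the value network's winrate
    estimate; lam=0 is rollouts only, lam=1 is value network only."""
    return (1.0 - lam) * rollout_result + lam * value_net_winrate
```

In this sketch the "Policy network (root node only)" baseline would use only `puct_score` with rollout results, while the stronger player would also feed `blended_eval` (or the value network alone) into the child statistics.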

> How do you handle handicap games? I see you excluded them from the KGS
> dataset. Can your value network deal with handicap?

I excluded handicap games.
My value network cannot handle handicaps. It is only for komi 7.5.

Hiroshi Yamashita
