[Computer-go] Elo vs CLOP
drake at lclark.edu
Tue Feb 11 13:51:34 PST 2014
On Tue, Feb 11, 2014 at 1:08 PM, Petr Baudis <pasky at ucw.cz> wrote:
> On Tue, Feb 11, 2014 at 11:42:24AM -0800, Peter Drake wrote:
> > A naive question:
> > In what situations is it better to use Coulom's Elo method vs his CLOP
> > method for setting parameters? It seems they are both techniques for
> > optimizing a high-dimensional, noisy function.
> Do you mean minorization-maximization?
> I'm not sure if it could be
> adapted for optimizing a black-box function sensibly.
I'm still fuzzy on this. Is it limited to boolean inputs (e.g., "include
features 3, 7, and 22")?
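For what it's worth, my understanding is that the underlying model is a (generalized) Bradley-Terry model, so the inputs are competing "teams" of features rather than booleans per se. The classic minorization-maximization update for plain Bradley-Terry strengths looks roughly like this -- a hypothetical minimal sketch, not Coulom's actual formulation, which extends it so each move is a team of feature gammas:

```python
# Minimal MM sketch for the plain Bradley-Terry model (assumed
# simplification of Coulom's generalized "team of features" version).

def mm_fit(n_players, results, iters=100):
    """results: list of (winner, loser) index pairs."""
    gamma = [1.0] * n_players
    wins = [0] * n_players
    for w, _ in results:
        wins[w] += 1
    for _ in range(iters):
        # MM update: gamma_i <- wins_i / sum over i's games of 1/(gamma_i + gamma_j)
        denom = [0.0] * n_players
        for w, l in results:
            inv = 1.0 / (gamma[w] + gamma[l])
            denom[w] += inv
            denom[l] += inv
        gamma = [wins[i] / denom[i] if denom[i] > 0 else gamma[i]
                 for i in range(n_players)]
        # Strengths are scale-invariant, so pin gamma[0] = 1.
        g0 = gamma[0]
        gamma = [g / g0 for g in gamma]
    return gamma
```

With two players and a 3:1 win record, this converges to a 3:1 strength ratio, as expected.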
> Moreover, it might
> not deal well with noisy observations.
Aren't the recorded-game data (used to find feature weights) noisy?
> But most importantly, it can optimize a function on presampled data,
> while CLOP will perform the sampling itself in order to enhance the fit
> of the quadratic model.
Ahhhh, that makes sense. So CLOP is good for guiding experiments, but when
you're learning from either recorded games or live playouts, it doesn't
apply; that's where the Elo method comes in.
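For concreteness, here's a toy 1-D sketch of the CLOP idea as I understand it: fit a logistic-of-quadratic model of the win rate to the noisy win/loss observations so far, then sample the next parameter value near the model's current optimum. This is hypothetical illustration code, not Coulom's implementation; `true_winrate` is a made-up stand-in for "play a game with parameter x".

```python
import math
import random

def true_winrate(x):
    # Made-up black box being tuned; peak win rate 0.9 at x = 0.3.
    return 0.9 - 0.5 * (x - 0.3) ** 2

def play(x):
    # One noisy win/loss observation.
    return 1 if random.random() < true_winrate(x) else 0

def fit_quadratic_logistic(xs, ys, steps=500, lr=0.5):
    """Fit P(win) = sigmoid(a + b*x + c*x^2) by gradient ascent."""
    a = b = c = 0.0
    for _ in range(steps):
        ga = gb = gc = 0.0
        for x, y in zip(xs, ys):
            z = max(-30.0, min(30.0, a + b * x + c * x * x))
            p = 1.0 / (1.0 + math.exp(-z))
            e = y - p  # gradient of the log-likelihood
            ga += e
            gb += e * x
            gc += e * x * x
        n = len(xs)
        a += lr * ga / n
        b += lr * gb / n
        c += lr * gc / n
    return a, b, c

def clop_like(iters=200):
    random.seed(0)
    xs, ys = [], []
    for i in range(iters):
        if i < 20:
            x = random.uniform(-1, 1)  # initial exploration
        else:
            a, b, c = fit_quadratic_logistic(xs, ys)
            # Sample near the fitted peak (only meaningful if concave).
            peak = -b / (2 * c) if c < 0 else random.uniform(-1, 1)
            x = max(-1.0, min(1.0, peak + random.gauss(0, 0.3)))
        xs.append(x)
        ys.append(play(x))
    a, b, c = fit_quadratic_logistic(xs, ys)
    peak = -b / (2 * c) if c < 0 else 0.0
    return max(-1.0, min(1.0, peak))
```

The key contrast with the Elo/MM method is visible in the loop: the optimizer chooses where to sample next, which is exactly what you can't do with presampled game records.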
> P.S.: I don't understand the details of minorization-maximization so
> maybe I'm wildly off in something.
You're not the only one. My math is much weaker than I would wish.