[Computer-go] effectiveness of transposition tables for go

Don Dailey dailey.don at gmail.com
Wed May 12 10:30:53 PDT 2010


Thomas,

Your method sounds very sensible, if I understand it correctly, and it seems
to me to be the best approach.

I'm not clear on the memory collection when you run out of memory.   Are you
saying that you track the "age" of each node, perhaps bumping a global age
every time you create a node, and just overwriting old entries when you are
looking for a slot?
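
A minimal sketch of that kind of age-based replacement, assuming a fixed-size
table and a small probe window; the names (tt_entry, tt_lookup_or_replace,
global_age), the sizes and the probe length are illustrative only, not taken
from any actual program:

#include <stdint.h>
#include <string.h>

#define TT_SIZE (1u << 20)          /* number of slots, power of two   */

typedef struct {
    uint64_t key;                   /* Zobrist hash of the position    */
    uint32_t visits;
    uint32_t wins;
    uint32_t age;                   /* value of global_age at last use */
} tt_entry;

static tt_entry table[TT_SIZE];
static uint32_t global_age;         /* bumped once per node creation   */

/* Find the slot for 'key'.  If the position is not present, reuse the
   oldest entry probed instead of failing. */
tt_entry *tt_lookup_or_replace(uint64_t key)
{
    uint32_t base = (uint32_t)(key & (TT_SIZE - 1));
    tt_entry *victim = &table[base];

    for (uint32_t i = 0; i < 4; i++) {            /* small probe window */
        tt_entry *e = &table[(base + i) & (TT_SIZE - 1)];
        if (e->key == key) {                      /* hit: refresh age   */
            e->age = global_age;
            return e;
        }
        if (e->age < victim->age)                 /* remember oldest    */
            victim = e;
    }
    /* Miss: overwrite the oldest, no longer visited, entry. */
    memset(victim, 0, sizeof *victim);
    victim->key = key;
    victim->age = ++global_age;
    return victim;
}

With a scheme like this nothing is ever explicitly freed: entries from
abandoned sub-trees simply stop having their age refreshed and become the
natural victims, which is the automatic collection described further down.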

Don



So basically you don't create a big child list until you are sure you need
it.   I'm not sure I fully understand your description beyond the gist of
it.    Do you generate only one child at a time, or all of them at once (as
soon as you know the parent will start getting expanded)?

On Wed, May 12, 2010 at 1:11 PM, Thomas Lavergne
<thomas.lavergne at reveurs.org> wrote:

>
> On 11 May 2010, at 19:12, Mark Boon wrote:
>
> >
> > On May 11, 2010, at 6:35 AM, David Fotland wrote:
> >
> >> It is quite memory hungry since near the leaves there are many positions
> >> with very few active children.  On 9x9 I let a few playouts go through a
> >> move before expanding it, and that gives me plenty of memory.  I ship
> >> with a default of 300 MB reserved for the hash table, and that's enough.
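
A rough sketch of that kind of delayed expansion, assuming a plain tree node
with a lazily allocated child list; the threshold and the names (visit,
expand, EXPAND_THRESHOLD) are assumptions for illustration, not taken from
any actual engine:

#include <stdlib.h>

#define EXPAND_THRESHOLD 8            /* playouts before children exist */

typedef struct node {
    int visits, wins;
    struct node *children;            /* NULL until expanded            */
    int num_children;
} node;

static void expand(node *n)           /* stub: allocate a child list    */
{
    n->num_children = 81;             /* e.g. one slot per 9x9 point    */
    n->children = calloc(n->num_children, sizeof(node));
}

void visit(node *n)
{
    n->visits++;
    if (n->children == NULL && n->visits >= EXPAND_THRESHOLD)
        expand(n);                    /* memory is spent only here      */
    /* ...otherwise select a child / run a playout as usual...          */
}
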
> >
> > So that confirms more or less how I thought hash-tables were implemented
> > (although others may still use different methods, of course). It uses
> > rather more memory than a tree implementation, but I see the advantages
> > in terms of efficiency.
> >
> > What I do in the tree is, when I first expand a node, I stick in a small
> > place-holder that just has the move and the RAVE values (and a bit more)
> > but no child information. Only when a node gets selected because it has
> > the best RAVE score does it get replaced by a full node. On average less
> > than 10% of the nodes get replaced by a full node.
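
A hedged sketch of that placeholder idea, assuming a light node that carries
only the move and RAVE statistics and is promoted to a full node on demand;
the struct layout and names are invented for illustration:

#include <stdlib.h>

typedef struct light_node {
    short move;                   /* coordinate of the move             */
    int   rave_visits;
    int   rave_wins;
    struct full_node *promoted;   /* NULL until promoted                */
} light_node;

typedef struct full_node {
    int         visits, wins;
    int         num_children;
    light_node *children;         /* array of light placeholders        */
} full_node;

/* Called when a light child has the best RAVE score and gets descended
   into: replace the placeholder by a full node. */
full_node *promote(light_node *l)
{
    if (l->promoted == NULL)
        l->promoted = calloc(1, sizeof(full_node));
    return l->promoted;
}
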
> >
> > You could probably do something similar with a hash-table. But the
> > implementation might get a little more complex.
>
> In fact it is a lot simpler to do with transposition tables as I implement
> them, because you get this behaviour without any extra coding.
> Since I save all the information about the children in the node itself, you
> can see this as a list of light nodes.
>
> When I reach the bottom of the tree, the classical thing is to create the
> first missing node and add it to the transposition table. As I only use
> heavy nodes, this would be very expensive in memory.
> But, as the last node keeps statistics about its children, I know how many
> times I've chosen each child and its winrate or RAVE value, so I can decide
> to delay the creation of the next node until these statistics satisfy me
> and prove that this path is valuable and must be extended.
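
A rough sketch of how such a heavy node carrying per-child statistics might
look, with the extension decision delayed until those statistics pass a
threshold; the fields, names and thresholds here are assumptions for
illustration, not taken from any actual program:

#include <stdint.h>

#define MAX_MOVES 82                  /* 9x9 board + pass                */
#define CHILD_VISIT_MIN 16            /* stats needed before extending   */

typedef struct {
    uint64_t key;                     /* hash of this position           */
    uint32_t age;
    int      num_moves;
    short    move[MAX_MOVES];
    int      child_visits[MAX_MOVES]; /* per-child counters kept here,   */
    int      child_wins[MAX_MOVES];   /* i.e. a list of "light nodes"    */
} tt_node;

/* At the bottom of the tree: only create the child's own heavy entry
   once the parent's statistics say the path is worth extending. */
int should_extend(const tt_node *parent, int i)
{
    double winrate;
    if (parent->child_visits[i] < CHILD_VISIT_MIN)
        return 0;
    winrate = (double)parent->child_wins[i] / parent->child_visits[i];
    return winrate > 0.4;             /* illustrative threshold          */
}

Only when a check like should_extend() succeeds does the child get its own
tt_node, so the per-child arrays play the role of the light nodes described
above.
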
>
> You can view the child list kept by each node as a set of light nodes in a
> tree. The only difference is that in a tree, when you convert a light node
> to a heavy one, the light one is freed, whereas in a transposition table it
> is kept; but as most of the nodes are leaves, this doesn't cost much.
>
> Another interesting thing with this kind of transposition table is that,
> when you start expanding a sub-tree for some simulations because it seems
> promising and finally discover it isn't a good path, you will abandon it
> and simulate another part of the tree. If at some point in the future you
> need some memory, those nodes will be collected automatically: as they are
> not visited anymore, they start to grow old. This is like doing the
> reverse, demoting a heavy node to a light one, and it also comes for free.
>
> All this is why I think transposition tables are simpler to implement and
> not much heavier than the usual trees.
>
> Tom