James Nathan: Brief Interview with Dr. Cameron Browne


As part of this week’s posts, Dr. Browne was kind enough to entertain some questions that I had, and here’s what he had to say.  I hope you’ve enjoyed this week’s series.

What inspired the LUDI project?

While working at a rather boring programming job in 2000, I wrote a book on Hex. This was originally meant to be an article… which grew into a book chapter… which grew into a complete book. The more I read about this fascinating game the more surprised I was that nobody had written a book on it before.

I then continued to research Hex and its variants and found a whole new genre of hundreds of related games, which led to a book on Connection Games. I don’t think anyone had realised how many connection games there were until then, but what struck me was how many of these games used the same basic rules, just combined in different ways and applied to different geometries. As a programmer, it seemed obvious to me that a computer could take these rules, recombine them in various ways, and then test the results for new and interesting games; and so the LUDI project was born.

What was the biggest hurdle in the project?

The biggest technical hurdle was measuring the evolved games for quality, i.e. their potential to interest human players. Game design is more of an art than a science; players can usually say whether they like a game or not, but not always why. Further, all players have different preferences in the games they play. Finding a way to measure such subtle aesthetic distinctions mathematically took up the bulk of the work on my thesis and was probably its greatest technical contribution.
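
To give a flavour of what measuring a game mathematically can look like, here is a minimal Python sketch of scoring a game from self-play statistics. The criteria shown (balance, completion, duration) are loosely inspired by the kinds of aesthetic measures the thesis discusses, but the names, formulas, and weights here are illustrative assumptions, not the actual LUDI measures.

```python
from dataclasses import dataclass

@dataclass
class TrialStats:
    """Aggregate statistics gathered from self-play trials of one game."""
    first_player_wins: int
    second_player_wins: int
    draws: int
    avg_moves: float

def balance(stats: TrialStats) -> float:
    """1.0 when both sides win equally often; 0.0 when one side always wins."""
    decisive = stats.first_player_wins + stats.second_player_wins
    if decisive == 0:
        return 0.0
    return 1.0 - abs(stats.first_player_wins - stats.second_player_wins) / decisive

def completion(stats: TrialStats) -> float:
    """Fraction of trials that end decisively rather than in a draw."""
    total = stats.first_player_wins + stats.second_player_wins + stats.draws
    return (total - stats.draws) / total if total else 0.0

def duration(stats: TrialStats, ideal: float = 40.0) -> float:
    """Penalise games that run much shorter or longer than an ideal move count."""
    return max(0.0, 1.0 - abs(stats.avg_moves - ideal) / ideal)

WEIGHTS = {balance: 0.4, completion: 0.4, duration: 0.2}  # illustrative weights

def quality(stats: TrialStats) -> float:
    """Weighted combination of per-criterion scores, each in [0, 1]."""
    return sum(weight * criterion(stats) for criterion, weight in WEIGHTS.items())

# Example: a fairly balanced, decisive game of reasonable length scores well.
print(quality(TrialStats(first_player_wins=48, second_player_wins=44,
                         draws=8, avg_moves=37.5)))
```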

From a personal viewpoint, the biggest hurdle was simply finding the time and motivation to do the thesis while working full time. This was a “hobby” PhD that I was doing in my spare time while working another programming job, complicated by the fact that I relocated from Australia to the UK midway through. But weekly nudges from my wonderful supervisor Frederic Maire made sure that I kept plugging away until it was done.

How long did running LUDI for the 1,300+ games take?

The LUDI experiments ran for two weeks on three computers on two continents (two at my house in London and one in Frederic’s office in Brisbane). These were standard desktop machines of the time, nothing special. Each of the three runs constituted its own gene pool, so I had to collate all three final populations into a single collection at the end of the experiment, but I don’t recall any collisions. The vast majority of the computing time was spent measuring the games for quality (several minutes to hours per game).

Note that each new game was easy to create, by crossing over the rules of two parent games and mutating the child, and only took milliseconds. However, most of the resulting children were poorly formed in some way and discarded, e.g. they had rules with invalid parameters or violated the grammar in some other way. So countless millions of candidate games were generated to produce the 1,300+ that actually went on to be evaluated. Unfortunately, I didn’t think to record statistics on the total number of games generated.
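
For readers curious about the shape of that generate-and-test loop, here is a minimal Python sketch. Representing a game as a flat list of rules is a simplification: LUDI’s games were trees of ludemes constrained by a grammar, and all names below are illustrative, not LUDI’s actual API.

```python
import random

def crossover(parent_a, parent_b):
    """Splice the rule lists of two parent games at a random point."""
    # Assumes each game has at least two rules.
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:cut] + parent_b[cut:]

def mutate(child, rule_pool, rate=0.1):
    """Randomly replace some rules with alternatives from the pool."""
    return [random.choice(rule_pool) if random.random() < rate else rule
            for rule in child]

def evolve(population, rule_pool, is_well_formed, measure_quality, target):
    """Breed children until `target` well-formed games have been evaluated."""
    evaluated = []
    while len(evaluated) < target:
        parent_a, parent_b = random.sample(population, 2)
        child = mutate(crossover(parent_a, parent_b), rule_pool)
        if not is_well_formed(child):
            continue  # cheap check; most children fail here and are discarded
        evaluated.append((child, measure_quality(child)))  # the expensive step
        population.append(child)
    return evaluated
```

This matches the cost profile described above: creating and checking a child takes milliseconds, while the quality measurement dominates the running time.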

Was this a slow cooker “set it and forget it” process or was manual input required?

It was a slow cooker: we launched our programs, then sat back to see what they produced. I did stop my programs a few times during the process to check whether anything interesting was happening, but each time I restarted them with their current populations as if no interruption had occurred.

The only manual input was in the choice of games for the initial population; that’s one of the beauties of evolutionary approaches. But in the interest of getting the widest range of results possible, I just threw in every game I could think of that could be described in the LUDI language in the time available, and started it up. These were the 79 “source” games.

How much did the survey results of the 79 games match your own tastes?

I have no idea! I don’t recall making this comparison at the time, perhaps due to the scientist’s instinct not to bias the experiment, but more likely because I was lazy and things were getting hectic by then (I ran the experiments the month before the thesis deadline, then wrote it up in two weeks).

Looking back on the ranking of the source games now, I’d say that it does agree with my tastes to a large extent, though I’d rank some of the Hex variants more highly and some of the N-in-a-row games less highly. Another factor here is that the LUDI AI played some games better than others, and I suspect that this introduced some bias, as players would probably enjoy games against a competent opponent more. For example, the LUDI AI was very good at N-in-a-row games, and during the experiments this was by far the type of game that it preferred to evolve.

With advances in computer processing power since LUDI, are you ever tempted to allow it to run longer and see what develops?

I have been tempted from time to time, but LUDI is in the end depressingly limited in the range of games that it can express and play competently. It served its role as a proof-of-concept that the general approach works, but I’m now in the process of developing a much more comprehensive system that should give even better results. Details below…

What is the origin myth of your attraction to combinatorial games?

I’ve always been fascinated by games, puzzles and mathematical curiosities. Then as a teenager I bought the book by Martin Gardner that includes the chapter on Hex and I was hooked. How could a game with such simple rules produce such complexity and strategic depth?

How did LUDI’s results, outside of Pentalath and Yavalath, influence your later designs?

By pretty much stopping my output as a game designer! This is the unfortunate effect of turning a hobby into a career: sometimes what used to be fun is now work. This is a common phenomenon; for example, I know professors of literature who got into that line of work through their love of books but no longer have time to read for pleasure.

Of course even the work can still be fun, but the focus is shifted. The few games that I’ve designed since doing my PhD tend to be experimental games designed to test some theory or with some other research purpose in mind, e.g. Shibumi, Hanoi, Try, Ludoku, etc.

Did you do any cursory review of LUDI results outside of the Big 19? Any mechanics that surprised you?

No, I only focussed on the 19 “playable” games for the purposes of the analysis. And once the thesis was submitted, I pretty much put the whole thing away, had a rest, and tried to catch up with the other aspects of my life that I’d had to neglect.

It could be interesting to trawl through the rest of the “unplayable” games, but the problem is that since none of these games work very well, the individual mechanisms would be hard to evaluate. And simply reading through the rules would not be sufficient; you really need to see the game being played and the rules in action to fully appreciate them.

It could be that there are some brilliant mechanisms hidden in there that could make excellent games with just a small adjustment. But that’s a different problem: game optimisation.

What was the most annoying part of LUDI?

The most annoying part was realising how limited it was in terms of the scope of games that it could support, despite all my efforts to make it general. This really struck home when I was adding new games to the “source” set in preparation for the evolutionary run, and it seemed that most new games that I wanted to add required new rules to be added to the language. Not a good sign in a supposed general game system.

There are over 200 keywords in the LUDI language, each representing a rule or piece of equipment, but even that was only sufficient to describe a small subset of one particular type of game, i.e. combinatorial games: turn-based, finite, deterministic, perfect-information, zero-sum games.
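
As an illustration of what those keywords describe, here is a hypothetical game description rendered as nested Python data. The structure is loosely modelled on published LUDI examples such as Yavalath, but the exact keyword names and nesting are assumptions for illustration, not LUDI’s real syntax.

```python
# A hypothetical ludeme-style game description as nested Python data.
# Keyword names are loosely modelled on published LUDI examples
# (e.g. Yavalath) but are NOT the exact LUDI syntax.
yavalath_like = {
    "game": "Yavalath-like",
    "players": ["White", "Black"],
    "board": {"tiling": "hex", "shape": "hex", "size": 5},   # equipment keywords
    "end": [                                                 # rule keywords
        {"all": "win",  "condition": {"in-a-row": 4}},
        {"all": "lose", "condition": {"in-a-row": 3}},
    ],
}
```

A grammar over such keywords constrains which of them may nest inside which, which is what makes malformed children cheap to detect and discard during evolution.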

What is the current focus of your research?

I’ve just started a new research project at Maastricht University called the Digital Ludeme Project, funded by a five-year grant from the European Research Council (ERC).

This project will involve developing a new general game system called LUDII, based on the principles of LUDI but improved in every way, to model the world’s traditional games throughout recorded human history in a single playable database. I plan to then apply phylogenetic analysis to model the evolution of traditional games, and explore the relationship between the development of games and the development of human culture and the spread of mathematical ideas. Here’s a link to the project’s web site (still under development): http://ludeme.eu

LUDII itself should provide a powerful platform for general game playing, and will hopefully provide a useful tool for game designers as well as researchers. I hope that it will eventually be able to automatically test games, detect problems, improve rule sets, design new games, etc. LUDI was the proof-of-concept, but LUDII will be the real thing.

What’s your favorite thing about what you do?

The chance to explore these ideas and run projects like the Digital Ludeme Project! This is what excites me and keeps my brain occupied most of my waking hours. And the time seems perfect for this type of research: computing power is orders of magnitude cheaper and more available than it was not so long ago, AI is booming as a hot research topic, and game AI in particular is maturing into a respected research field that’s attracting significant funding. But most importantly, it’s very satisfying being able to work each day on my own ideas rather than somebody else’s.

Related Reviews:

Monday – Cameron Browne’s “Automatic generation and evaluation of recombination games”
Tuesday – Yavalath and Manalath
Wednesday – Ndengrod/Pentalath, Valion, and Elrostir
Thursday – Volo, Feed the Ducks, and assorted puzzles

 
