Theory/Research

The Depth Project

Last month I attended a conference on computational modeling in games in Banff. It was a great opportunity to jam on a research question that I’ve been working on with NYU colleagues Andy Nealen, Julian Togelius, and Aaron Isaksen. We were able to share our thinking and get input from a lot of super smart people, especially Alex Jaffe, of Spry Fox, who it turns out has been working independently on this same question for a while and had lots of very useful ideas to contribute.

The question, in a nutshell – can we discover a well-defined property of game systems that corresponds, roughly, to what designers and players mean by “depth”? We’re talking about depth in a very narrow sense, the depth of abstract strategy games, not the broader, more ambiguous sense of depth as applied to thematic richness and emotional nuance. We are only looking at the very focused domain of game-as-problem. What allows some problems to reward a lifetime of study, continually providing surprising and interesting things to think about the more you think about them? Here is a video of the brief introduction to this topic that I gave at Banff. (Not sure why they added “and fun” to the title – we aren’t really interested in that!)

What follows is an overview of where our thinking is on this topic. This is a reflection of the thoughts of all the people mentioned above, filtered through my own perspective, so credit to them for anything clever and I’ll take the blame for any blunders. Eventually this will turn into a formal paper, but in the meantime here’s where we’re at:

Why Depth?

First of all, it seems to us to be an obvious question to ask. There are well-defined formal properties of game systems that can be analyzed and measured, things like state space and branching factor. We often talk about depth as if it were a property like this, but is it? Is there some precisely definable, objectively observable property that can tell us something interesting about a game’s underlying structure?

This is not a normative project. We don’t want to make claims about what structure games should have or what makes a game good. It’s very important to make this caveat repeatedly and clearly, because, in casual usage, “deep” is often used as a general superlative.

Similarly, we’re not looking for a threshold, a specific limit that separates “deep” games from “non-deep” games. Instead, we are looking for some measurable quality, which we might call “d”, that would be present in all games, just to different degrees.

We’d also, if possible, like to sidestep the issue of psychology. We aren’t primarily interested in how humans learn or what humans find interesting. We want to know if we can make observations about certain properties of a game’s formal system that are true regardless of the particular details of human cognition. True in a way that would be recognized by aliens, in the same way that observations about state space complexity and branching factor are true.

If we are able to identify such a property, it may provide some insight into questions of artificial intelligence and machine aesthetics. If we can describe what makes certain problems interesting in this particular way then we may learn something about “interestingness” in general, about the difference between data and meaning. Deep games require knowledge, explanation, creativity, and cleverness. Modeling depth may help us better understand how to think about these things.

Such a model could also inform our discussions about game design, suggesting directions for design exploration and expanding our palette of analytical tools and techniques.

However, for me personally, the primary purpose of this project is pure knowledge. I want to know if there is some specific structural quality that is shared by Chess and Go and Poker and Bridge and Magic: the Gathering, some recognizable topological feature of their state space terrain. And if there is then I want to see it, and be able to talk about it and reason about its ramifications.

What depth isn’t

I’ve mentioned state space and branching factor several times already. Could it be that one of these features simply is the property we’re looking for? Or some simple combination of the two?

The reason these features aren’t good candidates for this property is that it’s easy to imagine counterexamples – you can take any game and inflate its state space by adding lots of extra states that the game can be in. If these states aren’t relevant to the problem of figuring out how to win then they won’t provide any of the qualities we are looking for – they won’t make the game more challenging and more interesting to study. Imagine a version of Chess where after you move a piece you can choose to flip a two-sided token to either its blue side or its red side. You’ve doubled the state space of Chess (there are now two distinct versions of every position the game can be in) but you obviously haven’t increased its depth. You can run the same thought experiment for branching factor, adding a bunch of pointless choices that don’t truly matter to the outcome of the game.
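To make the thought experiment concrete, here’s a minimal sketch in Python (the toy game and the numbers are illustrative assumptions of mine, not anything derived from Chess): bolting an irrelevant token onto a tiny game doubles its state count while changing nothing about the problem of playing well.

```python
# Toy stand-in for the thought experiment: one-pile Nim, take 1 or 2
# stones per turn. A state is (stones_left, player_to_move).
N = 10
plain_states = {(s, p) for s in range(N + 1) for p in (0, 1)}

# Bolt on the irrelevant mechanic: after each move, flip a two-sided
# token to red or blue. Every position now exists in two versions.
inflated_states = {(s, p, c) for (s, p) in plain_states for c in ("red", "blue")}

print(len(plain_states), len(inflated_states))  # 22 44 - state space doubled

# Any sensible analysis of a position simply ignores the token, so the
# problem of finding good moves is exactly as hard as it was before.
```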

What we’re looking for is meaningful state space, meaningful branching factor. But this raises the question – what, after all, makes game states and choices meaningful? It is our assumption that state space and branching factor are necessary but not sufficient to explain the elusive property we are in search of.

Likewise, the more general property of computational complexity is clearly relevant but does not by itself do the conceptual work we need. You can have a game whose solution is arbitrarily difficult to compute without guaranteeing that the game will have the kind of features we are looking for – the capacity to reward life-long learning and support a large, long-term community of serious, dedicated players.

Evidence of depth – the skill chain

So what exactly are these features? Can we get more precise about them?

A key concept in addressing this question is the idea of the skill chain, as articulated in Characteristics of Games, by Elias et al. The size of a game’s skill chain is the number of distinct steps in the ranking of all players, where players at each step beat all the players at lower steps some significant percentage of the time.

Why is this important? Well, what it indicates is a game in which a player can improve through study: the more you think about the game, the better you get. Players of a game like this are on a journey of gradual and continuous improvement, ascending ever upwards towards better understanding and stronger play. Trivially easy games can’t support long skill chains, and neither can all-or-nothing puzzles, no matter how large and difficult they might be.

In order to make this quality more precise and measurable, we imagine a skill chain made up of different algorithms called strategies. Each strategy is a complete set of instructions for how to play the game: when to search ahead in the game tree, how to evaluate board positions, what rules of thumb to apply when making decisions, when to just guess randomly, and so on.

The more complex these strategies become, the more computational resources (time and/or memory) they require to execute, and the better the win rates they achieve against simpler strategies.

[Figure: skill chain graph]

Using this picture, we can frame “d” as the capacity of a game system to support a ranked population of strategies that provide partial/approximate solutions. The more discrete ranks in such a population, the greater the degree of this property within the game. In other words, the more a game has this property, the more it is susceptible to a sequence of partial solutions, each one more complex, requiring more time and/or memory to execute, and providing a better approximate solution to the problem of the game.

On the one hand, this is really just a precise re-statement of the original problem: we’ve clarified the thing we went looking for in the first place. On the other hand, such a clarification is actually a very useful conceptual tool. For example, we are starting to build skill chains of this type for very simple games and to observe how they change under different conditions.
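To give a flavor of what “building a skill chain” can mean computationally, here is a hedged sketch (the 75% threshold, the greedy layering, and the win-rate numbers are all our own illustrative assumptions, not a definition from Characteristics of Games): given estimated pairwise win rates over a population of strategies, count how many rungs you can stack such that each rung beats every rung below it sufficiently often.

```python
def skill_chain_length(win_rate, threshold=0.75):
    """Greedily build a chain of strategies in which each later strategy
    beats every earlier one with probability >= threshold.

    win_rate[i][j] = estimated probability that strategy i beats strategy j.
    Returns the length of the chain found (a lower bound on the longest one).
    """
    n = len(win_rate)
    # Order candidates by average win rate as a rough strength proxy.
    order = sorted(range(n), key=lambda i: sum(win_rate[i]) / n)
    chain = []
    for i in order:
        if all(win_rate[i][j] >= threshold for j in chain):
            chain.append(i)
    return len(chain)

# Hypothetical win rates among four strategies, from random play (0)
# up to a strong searcher (3):
W = [
    [0.50, 0.20, 0.05, 0.01],
    [0.80, 0.50, 0.30, 0.10],
    [0.95, 0.70, 0.50, 0.25],
    [0.99, 0.90, 0.75, 0.50],
]
print(skill_chain_length(W))  # 3: strategy 2 beats 1 only 70% of the time
```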

Search and Heuristics – an Information-Theoretic Approach

But this feature of a game’s skill chain is still just a consequence of d. We still want to understand what underlying property of the system leads to this result.

One question to ask is – which kinds of problems lend themselves to partial/approximate solutions and which don’t?

We know that every game of the type we are looking at has some theoretical perfect strategy, some algorithm that tells you exactly what move to make at any point. In fact, we know how to produce this solution in theory (although it is far from practical in any substantial game): As long as it is a game of perfect information, you can calculate out the entire game tree, find a winning node and work backwards looking for a path that your opponent can never force you off of.
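As a concrete (if utterly impractical at scale) sketch of that procedure, here is the backward induction in Python for win/lose games, written over a generic interface whose names are ours:

```python
from functools import lru_cache

def solve(start, moves):
    """Backward induction for a two-player, perfect-information,
    win/lose game. moves(s) returns the successor states of s;
    a state with no moves is a loss for the player about to move."""
    @lru_cache(maxsize=None)
    def player_to_move_wins(s):
        # You win iff some move leads to a state where your opponent,
        # now the one to move, is lost.
        return any(not player_to_move_wins(t) for t in moves(s))
    return player_to_move_wins(start)

# One-pile Nim again: take 1 or 2 stones, taking the last stone wins.
nim_moves = lambda s: tuple(s - k for k in (1, 2) if k <= s)
print(solve(10, nim_moves))  # True: 10 stones is a win for the first player
```

Every state of the game tree gets visited, which is exactly why this is hopeless for anything the size of Chess.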

Imagine a game in which this were the only way to play. Whatever your position, the only way to do better than choosing a move at random is to look ahead in the game tree, calculating move by move until you see the necessary outcome of each move. A game of this type would have only a single dot on the skill chain graph: there’s only one possible strategy – perfect play via raw look-ahead calculation.

What’s missing from this type of game is heuristics – the rules of thumb that players use as a shortcut through the intractable problem of fully searching the game tree: “control the center” and “doubled pawns are bad” in Chess, “hane at the head of two stones” and “empty triangles are bad” in Go.

Heuristics take advantage of regularities in the game tree to guide the player towards areas of state space where winning paths are statistically denser. It’s not true that every Chess position in which you control the center squares leads to a win, but it is true that, on average, these positions contain shorter and more certain paths to winning outcomes. In a sense the heuristic operates as a kind of compression function on the game tree – a map that reveals structural features in the underlying tree. A fuzzy, blurry, tattered map, but a map nonetheless.

In fact, in some ways, heuristics simply are the partial strategies we are talking about. The more heuristics a game can support, the more you can combine them into better and better strategies, and the more you can improve with effort and study.

But wait – it’s not quite that simple.

Imagine a game in which there was a heuristic that allowed you to play perfectly. In such a game you would never need to search the game tree at all; whatever the situation, you would simply apply this one weird trick. This is the opposite way in which a game can fail to support a long skill chain: by having a simple strategy that exploits an underlying regularity of the game tree to make the problem trivial.

Both of these theoretical depth-lacking games – the one that requires pure search, and the one that allows for one simple heuristic – share an interesting property. In both cases the description of how to find the best move is short. In the first game, the description is “search the entire game tree and find the winning move”; in the second, it is “always move downhill” (or whatever). Even though the process indicated by the first description is impossibly difficult and that of the second trivially easy, the descriptions themselves are both short.

But in deep games the best description for finding the best move is long and complicated. It will require a mix of heuristics (you probably want to move downhill here) and raw look ahead (actually you need to go uphill here because look – otherwise this bad thing will happen).
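A sketch of what such a mixed strategy looks like in code: a textbook depth-limited negamax, where the heuristic stands in for the part of the tree you don’t search and raw look-ahead overrides it wherever the search sees far enough to find a concrete refutation (the interface names are ours, and the heuristic is assumed to score positions from the mover’s point of view):

```python
def negamax(state, depth, moves, heuristic, terminal_value):
    """Value of `state` for the player to move, mixing search and heuristics.

    moves(s)          -> successor states of s
    heuristic(s)      -> cheap rule-of-thumb estimate ("control the center")
    terminal_value(s) -> exact value if the game is over at s, else None
    """
    exact = terminal_value(state)
    if exact is not None:
        return exact                 # raw calculation reached the end
    if depth == 0:
        return heuristic(state)      # frontier: fall back on rules of thumb
    # One ply of raw look-ahead: pick the move that leaves the opponent worst off.
    return max(-negamax(t, depth - 1, moves, heuristic, terminal_value)
               for t in moves(state))
```

Turning the depth knob up and swapping in sharper heuristics is one way to populate the skill chain from earlier: depth 0 with a crude heuristic sits near the bottom, deeper searches with better heuristics sit further up.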

We now believe that the quality we are looking for is a kind of semi-orderedness of the state space, a level of underlying structure that allows for some useful compression of the game tree but not so much that raw search is never needed.

We can frame this in terms of entropy. The best way of looking for something, like a best move, depends on how ordered the space you are searching through is. If the space is completely disorganized then there’s no better way to find the item than to examine every possibility. This corresponds to high entropy. If the space is well-organized I can give you precise instructions about where to look. This corresponds to low entropy.

high entropy = weak heuristics, all search
low entropy = strong heuristics, no search

[Figure: entropy graph]

The description of the best way to find something in a semi-ordered space is long and complicated. Deep games are semi-ordered. This is what the literature a deep game produces is made of: all the strategy books and blog posts and videos and forum arguments are part of this lengthy description of the most efficient method for finding the best move in a semi-ordered system under conditions of computational constraint.
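The same point in miniature, outside of games: consider looking up a value in a list. Fully ordered, the best method has a short description (binary search); fully disordered, it also has a short description (scan everything). In a semi-ordered list – say, sorted into coarse buckets but scrambled within each bucket – the best method is a hybrid whose description is necessarily longer. A sketch, with the bucketed layout as our toy assumption:

```python
import bisect

def find_semi_ordered(buckets, bounds, target):
    """Search a semi-ordered space: buckets[i] holds the values in
    [bounds[i], bounds[i+1]), in arbitrary internal order.

    The method is a hybrid: an ordered step (binary search over the
    bucket boundaries - the "heuristic") followed by raw exhaustive
    search inside the one bucket the heuristic points at.
    """
    i = bisect.bisect_right(bounds, target) - 1   # ordered part: logarithmic
    if 0 <= i < len(buckets):
        for j, v in enumerate(buckets[i]):        # disordered part: brute force
            if v == target:
                return (i, j)
    return None

# Values grouped by decade, shuffled within each decade:
buckets = [[7, 2, 9], [13, 18, 11], [24, 20, 27]]
bounds = [0, 10, 20, 30]
print(find_semi_ordered(buckets, bounds, 18))  # (1, 1)
```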

Final Thoughts

If this is the right direction for thinking about this issue then playing a deep game will involve a complex dance between heuristics and pure search. And sure enough, listening to the real-time thought process of an expert player often reveals just this – the compressed knowledge of proverbs, patterns, and rules of thumb alternating with periods of raw, if/then, move-by-move calculation.

Consider also the “killer move” – the unexpected, spectacular, high-impact play. What makes a killer move surprising and impressive is the way it deviates from a strong heuristic, indicating that heuristic’s limitations. (Watch, for example, Michael Redmond doing a triple take in response to AlphaGo’s 5th-line shoulder hit in game 2.) Killer moves demonstrate how raw search can outperform even the most powerful heuristic. How to avoid the trap of a local maximum in solution space? Search!

Speaking of AlphaGo, it is interesting to note that the architecture of the world’s best game-playing AI reflects this same search/heuristic balance. Its Monte Carlo tree search uses heuristics to channel the raw power of pure look-ahead, efficiently steering the search of the game tree toward directions with the greatest average value, and its board-evaluation functions are heuristics baked into its neural nets – patterns discovered through millions of real and simulated games.
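To make the balance explicit, here is the shape of the selection rule an AlphaGo-style MCTS uses to decide which branch to search next, a sketch loosely following the PUCT rule described in the AlphaGo papers (the variable names and numbers here are ours): the learned heuristic enters as a prior that biases raw search toward promising moves, without ever preventing the search from overriding it.

```python
import math

def select_move(children, c_puct=1.5):
    """Pick which child of a search node to explore next, PUCT-style.

    Each child carries:
      q - average value found so far by raw search through this move
      p - heuristic prior: how promising the move *looks* (e.g. from a
          policy network trained on millions of games)
      n - how many times the search has visited this move
    The exploration term shrinks as a move gets searched more, so raw
    look-ahead evidence gradually overrides the heuristic prior.
    """
    total_n = sum(ch["n"] for ch in children)
    def puct(ch):
        return ch["q"] + c_puct * ch["p"] * math.sqrt(total_n) / (1 + ch["n"])
    return max(children, key=puct)

children = [
    {"q": 0.52, "p": 0.60, "n": 120},  # the move the heuristic loves
    {"q": 0.55, "p": 0.05, "n": 30},   # an ugly move the search likes
]
print(select_move(children))
```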

The human brain itself may demonstrate a similar structure. In Thinking, Fast and Slow, psychologist Daniel Kahneman outlines the two main modules of human cognition – System 1 (snap judgments, intuition, reflex) and System 2 (slow, step-by-step, logical calculation). System 2 corresponds to the theoretically perfect but impractically slow process of pure search, while System 1 corresponds to the compression of reason into powerful but flawed heuristics.

If you allow me to get even more vague and philosophical for a moment, I might even suggest that we see something of the same dynamic in the contrast between two of the world’s most influential systems of moral judgment – utilitarianism (raw search) and Kantian deontology (heuristics).

Now that I think of it, even Hearts and Minds, my own earlier foray into reason and emotion in games, seems, in retrospect, to express this exact same idea. Weird.

Anyway, that’s roughly where we are with this project, minus a lot of related issues and specific details. We are looking to put this research into a published paper sometime in the near future. Stay tuned for further developments.