I just finished reading a little article on the motivation and methods of dumbing down game AIs. It’s particularly interesting in that it makes a good case study in how cognitive science and traditional AI differ.

The article starts off by commenting on the challenges of building less-than-perfect AIs, which is interesting in its own right. Traditional AI is often concerned with optimal, deterministic, and efficient solutions to a given scenario. As a cognitive psychologist, I’m more concerned with the solution that best matches human performance. And this focus on human-like performance is what dumbing down AIs is all about.

The first possible resolution presented (and summarily dismissed) is to reduce the amount of computation performed. This rarely works, as it results in completely idiotic behavior that even novices would be loath to exhibit. From a cognitive modeling standpoint, novice/expert distinctions are primarily represented as differences in knowledge base, skill, and learning, while total computation time is relatively unaffected. Novices are novices not because they can’t think as long and as hard about a problem, but because they lack the relevant strategies, experience, and learned optimizations.
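
To make the contrast concrete, here’s a minimal sketch in Python (the minimax-style searcher and every name in it are mine, not the article’s): capping the search depth is the “less computation” approach, while handing the same full-depth search an impoverished evaluation function models the knowledge difference that actually separates novices from experts.

```python
def minimax(state, depth, evaluate, maximizing=True):
    """Depth-limited minimax. `state` is assumed to expose
    is_terminal(), moves(), and apply(move) -- a hypothetical API."""
    if depth == 0 or state.is_terminal():
        return evaluate(state), None
    best_score = float('-inf') if maximizing else float('inf')
    best_move = None
    for move in state.moves():
        score, _ = minimax(state.apply(move), depth - 1, evaluate,
                           not maximizing)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# The dismissed approach: keep the expert evaluation, cripple the
# lookahead. The blunders this produces look nothing like a novice's.
def crippled_ai(state, expert_eval):
    return minimax(state, depth=1, evaluate=expert_eval)[1]

# The cognitive-modeling contrast: full lookahead, but an evaluation
# function stripped of learned pattern knowledge.
def novice_ai(state, novice_eval):
    return minimax(state, depth=6, evaluate=novice_eval)[1]
```

The second agent fails the way people fail, by misjudging positions, rather than by missing one-move threats.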

Instead, the author argues, AIs should “throw the game” in a subtle but significant way (i.e., make a simple mistake at a pivotal point). This is actually fairly easy to do, assuming you have an adequate representation of the scenario, and computer games are virtually always blessed with omniscience. What’s most interesting is that this is effectively scaffolding in the Vygotskian sense, with the AI opponent acting as a guide in the player’s skill development. If the AI is aware of the player’s skill level (and not just in the gross easy/medium/hard sense), perhaps through a model-tracing mechanism, it can tune its behavior dynamically to provide just enough challenge, a technique that cognitive tutors have used for quite some time now.
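
Here’s a sketch of how that might look in code, assuming the engine can already score candidate moves (the threshold, the skill estimate, and every name here are hypothetical, not the article’s): at a pivotal point, the AI concedes its decisively best move when its model of the player says the challenge would otherwise be too steep.

```python
PIVOT_THRESHOLD = 0.3  # score gap that marks a "pivotal point" (tunable)

def choose_move(scored_moves, player_skill):
    """scored_moves: list of (move, score) pairs from the engine.
    player_skill: running estimate in [0, 1], e.g. maintained by a
    model-tracing mechanism (1.0 = expert)."""
    ranked = sorted(scored_moves, key=lambda ms: ms[1], reverse=True)
    if len(ranked) < 2:
        return ranked[0][0]
    (best, best_score), (runner_up, second_score) = ranked[0], ranked[1]
    # A pivotal point: the best move is decisively better than the rest,
    # so quietly conceding it "throws the game" without looking idiotic.
    if best_score - second_score > PIVOT_THRESHOLD and player_skill < 0.5:
        return runner_up
    return best
```

The interesting design work is in `player_skill`: updated move by move against a model of what a competent player would have done, it gives the scaffolding its dynamic quality.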

The author also points out the utility (and failings) of reducing the accuracy of the AI’s information. This particular issue has always stuck in my craw, as a gamer and as a psychologist. Perfect information is an illusion that can only exist in low-fidelity approximations of a system; ratchet up the fidelity and the inherent noise in the system starts to become evident. Humans are quite at home with uncertainty (or we just ignore it entirely at the perceptual level). One of the easiest ways to dumb down an AI is to give it the same limitations we have, rather than imposing new, artificial ones. It’s not about probabilistically ignoring the opponent’s last move, but about not letting the AI see past the fog of war in the first place. Don’t add small random noise to the pool-shot trajectory; instead, make the AI line up the shot as we do, with perceptual tricks and extrapolated imaginal geometries.
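
As a toy illustration of the difference (the unit representation is mine, purely for the sketch): instead of corrupting perfect information with noise, restrict the AI’s world model to what its own units can currently see, exactly as the human player’s is restricted.

```python
import math

def visible_enemies(my_units, enemy_units, sight_radius):
    """Fog-of-war filter: the AI only knows about enemies currently in
    sight of one of its own units -- the same constraint the human
    player lives under. Units are (x, y) tuples here."""
    return [
        (ex, ey)
        for ex, ey in enemy_units
        if any(math.hypot(ex - ux, ey - uy) <= sight_radius
               for ux, uy in my_units)
    ]

# Contrast with the criticized hack, which keeps perfect information
# and then randomly forgets some of it:
#   seen = [e for e in enemy_units if random.random() > p]
```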

Cognitive science would dumb down the AI not by introducing noise, clever game throwing, or similar crippling, but by introducing the same limitations that humans possess. The limitations of perception, action, memory, attention, and skill are what make us the adaptable agents that we are. All of this is offered just as a point of comparison; cognitive modeling is still more research than application (with some notable exceptions). But I can see a near-term future where game developers build human-like opponents not through clever programming, but through a genuine focus on how humans actually play.
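
If I were to sketch that idea (purely illustrative, with defaults pulled from thin air, and nothing like a validated cognitive model), the human-like limitations would live in a wrapper around the agent rather than in its decision logic:

```python
import collections
import random
import time

class HumanlikeAgent:
    """Wraps a decision policy in human-style constraints: reaction
    latency, a small working-memory buffer, and forgetting."""

    def __init__(self, policy, reaction_ms=250, memory_span=4, decay=0.1):
        self.policy = policy                    # callable(state, recalled) -> action
        self.reaction_ms = reaction_ms          # action limitation
        self.memory = collections.deque(maxlen=memory_span)  # memory/attention limit
        self.decay = decay                      # chance any one memory drops out

    def observe(self, event):
        self.memory.append(event)               # oldest events fall off the deque

    def act(self, state):
        recalled = [e for e in self.memory if random.random() > self.decay]
        time.sleep(self.reaction_ms / 1000.0)   # human-scale reaction time
        return self.policy(state, recalled)
```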