June 4, 2009: 3:53 pm: Posters, Publications

Harrison, A.M., & Trafton, J.G. (2009). Gaze-following and awareness of another’s perspective in chimpanzees. In A. Howes, D. Peebles, & R. Cooper (Eds.), 9th International Conference on Cognitive Modeling – ICCM2009, Manchester, UK.

March 18, 2009: 12:12 pm: Cognitive Modeling, Errata

I just finished reading a little article on the motivation and methods behind dumbing down game AIs. It’s particularly interesting in that it makes a good case in point for how cognitive science and traditional AI differ.

The article starts off by commenting on the challenges of building less-than-perfect AIs, which is interesting in its own right. Traditional AI is often concerned with the optimal, deterministic, and efficient solution to a given scenario. As a cognitive psychologist, I’m more concerned with the solution that best matches human performance. And it is this focus on human-like performance that dumbing down an AI is all about.

The first possible resolution presented (and summarily dismissed) is to reduce the amount of computation performed. This rarely works as it results in completely idiotic behavior that even novices would be loath to exhibit. From a cognitive modeling standpoint, novice/expert distinctions are primarily represented as knowledge base, skill, and learning differences – but the total computational time is relatively unaffected. Novices are novices not because they can’t think as long and as hard about a problem, but because they lack the relevant strategies, experience, and learned optimizations.

Instead, the author argues, AIs should “throw the game” in a subtle but significant way (i.e. make a simple mistake at a pivotal point). This is actually fairly easy to do assuming you have an adequate representation of the scenario, and computer games are virtually always blessed with omniscience. What’s most interesting is that this is effectively scaffolding in the Vygotskian sense, with the AI opponent acting as a guide in the player’s skill development. If the AI is aware of the skill level of the player (and not in the gross easy/medium/hard sense), perhaps through a model-tracing mechanism, it can tune its behavior dynamically to provide just enough challenge – a technique that cognitive tutors have used for quite some time now.

The author also points out the utility (and failings) of reducing the accuracy of the AI’s information. This particular issue has always stuck in my craw as a gamer and as a psychologist. Perfect information is an illusion that can only exist in low-fidelity approximations of a system. Ratchet up that fidelity and the inherent noise in the system starts to become evident. Humans are quite at home with uncertainty (or we just ignore it entirely at the perceptual level). One of the easiest ways to dumb down an AI is to give it the same limitations that we have, rather than imposing new, artificial ones. It’s not about probabilistically ignoring the opponent’s last move, but rather not letting the AI see past the fog of war in the first place. Don’t add small random noise to the pool-shot trajectory; rather, make the AI line up the shot as we do, with perceptual tricks and extrapolated imaginal geometries.

Cognitive science would dumb down the AI not by introducing noise, clever game throwing, or similar crippling, but by introducing the same limitations that humans possess. The limitations of perception, action, memory, attention, and skill are what make us the adaptable agents that we are. All of this is just a point of comparison. Cognitive modeling is still more research than application (with some notable exceptions). However, I can see a near-term future where game developers build human-like opponents not through clever programming, but through a genuine focus on how humans actually play.

December 3, 2008: 11:50 am: Cognitive Modeling, jACT-R, Robotics

ACT-R’s manual-motor system (derived from EPIC) is really starting to show its limitations as we push on it within the embodied robotics domain. I’ve commented elsewhere regarding a more general implementation of a motor system (not just hands), but that has been falling short. While the future certainly holds radical changes for the perceptual/motor systems, there is still the need for short-term fixes that don’t radically change the architecture.

One such fix that I’ve been playing with recently is compound motor commands. In jACT-R (and ACT-R proper), motor commands are described by the motor/manual module and their execution is handled by the underlying device. This limits the modeler to those commands, and they must be managed at the production level. Typically this requires some measure of goal involvement, as reflex-style productions (i.e. ones that match no goal, imaginal, or retrieval buffer) often don’t carry sufficient information to evaluate their appropriateness. Compound motor commands address this by allowing modelers to define their own motor commands that delegate to the primitive commands available. These compound commands can be added to the motor buffer (which will actually contain them), allowing reflex-style productions to merely match the contents of the motor buffer to control the flow of motor execution.

Pursue-command

The following compound command takes a configural identifier, which allows it to reference a specific spatial representation in the configural buffer. It uses this information to direct turn and walk commands (provided by PlayerStageInterface) in order to approach the target.

(chunk-type pursue-target-command (:include compound-motor-command)
   ( configural-id nil )           ;; who’s our target
   ( distance-tolerance 0.2 )      ;; get w/in 20cm of the target
   ( angular-tolerance 5 )         ;; keep the target w/in a 10 deg arc (+/- 5 deg)
   ( state busy )                  ;; command state, not module state
   ( remove-on-complete t )        ;; clear the motor buffer (-motor) when complete
   ( no-configural-is-error nil )) ;; whether an empty configural buffer is an error

There are then seven simple reflex-style productions that turn and move the model towards the target. That set even includes an error-recovery production (which is incredibly important if you’re actually acting in an environment):

(p pursue-target-attempt-recovery
   =motor>
      isa pursue-target-command
   ?motor>
      state error      ;; module, not command
==>
   =motor>
      state error      ;; command, not module
   +motor>
      isa walk
      distance -0.25   ;; jump back
)
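
The other productions follow the same pattern: match the pursue-target-command sitting in the motor buffer, consult the configural percept, and delegate to a primitive command. Purely as an illustrative sketch (this is not one of the actual seven productions, and the primitive turn command and its heading slot are assumptions rather than PlayerStageInterface’s real parameters), a turn-toward production might look something like this:

(p pursue-target-turn-toward
   =motor>
      isa pursue-target-command
      configural-id =target
   =configural>
      isa configural
      identifier =target
      center-bearing =bearing
   ?motor>
      state free           ;; module is free to accept a primitive request
==>
   =motor>                 ;; hold on to the compound command
   =configural>            ;; keep monitoring the percept
   +motor>
      isa turn             ;; assumed primitive command name
      heading =bearing     ;; assumed slot name; tolerance test omitted for brevity
)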

This reusable compound command and its productions are used in multiple models by merely making a +motor request after verifying that the motor buffer is empty and free:

(p pursuit-attend-succeeded-match
   =goal>
      isa pursue
      step searching
      target =target
   =visual>
      isa visual-object
      token =target
   =configural>
      isa configural
      identifier =target
      center-bearing =bearing
   ?motor>
      - state busy
      buffer empty
==>
   +motor>
      isa pursue-target-command
      configural-id =target
   =configural>
   =visual>
   +goal>
      isa forage
)

This mechanism carries with it a handful of useful characteristics beyond giving modelers a higher level of motor control and abstraction.

Perception-Action Loops

With the exception of the mouse movement command, all motor commands in ACT-R are decoupled from perception. At the lowest level this is a good thing (albeit a challenge: using radians to control key targets for fingers?). However, there is ample evidence that perception and action are tightly coupled. The previous example establishes an explicit link between a to-be-monitored percept and an action. A similar mechanism could be used for the perceptual monitoring that controls steering while driving. I’m currently working on similar commands to keep our new robots continuously fixated on the object of visual attention, complete with moving eyes, head, and body. When our visual system is able to recognize the robot’s own hands, guided reaching becomes a difference-reduction problem between the hand’s and target’s spatial representations.
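
As a purely hypothetical sketch of that direction (these are not the actual fixation commands in progress; the slot names simply mirror the pursue-target-command above), a fixation command could be declared the same way and driven by its own small set of reflex-style productions:

(chunk-type fixate-target-command (:include compound-motor-command)
   ( configural-id nil )        ;; percept to keep fixated
   ( angular-tolerance 2 )      ;; re-center gaze when the target drifts beyond this
   ( state busy )               ;; command state, not module state
   ( remove-on-complete nil ))  ;; keep tracking until the buffer is explicitly cleared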

Parameterized commands

The previous example uses two slot-based parameters to control the execution of the compound command. To the extent that they are propagated to the underlying primitive commands, ACT-R’s learning mechanisms present possible avenues for a model to move from motor babbling to more precise control.

Further Separation of Concerns

One of my underlying principles in modeling is to separate out concerns. One aspect of this is trimming productions and goal states to their bare minimum, permitting greater composition of subsequent productions. Another aspect is the generalization of model components to maximize reuse. Compound commands permit the motor system to access limited state information (i.e. what spatial percept to track), offloading it from the goal or task state structures, which simultaneously simplifies them and increases reusability.

This quick modification has dramatically simplified our modeling in complex, embodied environments. It is, in principle, consistent with canonical ACT-R. The only change necessary is to allow the motor buffer to contain a chunk (as it currently does not). In terms of short-term fixes, this has some serious bang-for-the-buck.

This change has already been made in the current release of jACT-R, should anyone want to play with it.

December 1, 2008: 10:06 am: Publications

Kennedy, W.G., Bugajska, M.D., Harrison, A.M., & Trafton, J.G. (2009). “Like-me” Simulation as an Effective and Cognitively Plausible Basis for Social Robotics. International Journal of Social Robotics. (Paper)

October 2, 2008: 3:26 pm: ACT-R/S, Cognitive Modeling, Research, Spatial Reasoning

The past month has seen me up to my eyeballs in spatial modeling. I’ve been blasting out models and exploring parameter spaces. I’ve been doing all of this to get an ACT-R/S paper out the door (crazy, I know). I’ve got a single model that can accomplish two different spatial tasks across two different experiments. However, fitting the two simultaneously looks impossible. Inevitably this is due to mistakes in both the model and the theory, but how much of each?

Is it a serious theoretical failing that I can’t zero-parameter fit the second experiment? Given how often modelers twiddle parameters between experiments, I doubt this. However, I’m proposing an entirely new module – new functionality. The burden of proof required for such an addition pushes me towards trying to do even more – perhaps too much.

After much head-bashing (it feels so good when you stop) and discussion, I’ve decided to split the paper in two: submit the first experiment/model ASAP, and let the model and theory issues surrounding the second percolate for a few months. While this doesn’t meet my module-imposed higher standards, it does have the added benefit of being penetrable to readers. The first experiment was short and sweet, with a cleanly modeled explanation. It makes an ideal introduction to ACT-R/S. Adding the second experiment (with judgments of relative direction) would have been far too much for all but the most extreme spatial modeler (as many of those as there are).

I just have to try to put the second experiment out of my mind until the writing is done… easier said than done.

September 18, 2008: 11:32 am: Employment

Post-doctoral Fellow (Advisor: J. Gregory Trafton). Continued examining the form and function of human spatial reasoning by embedding spatial cognitive models into mobile robotics platforms. Designed and developed a series of models to explore the effect of robots with human cognitive capabilities on human-robot interaction. Leveraged prior experience with simulation and cognitive modeling to develop the high-level control system for the current generation of MDS robots. 2007-2010.

September 17, 2008: 11:26 am: Employment

Graduate Researcher (Advisor: Christian Schunn). Designed and conducted a series of studies exploring errors in human spatial reasoning. Methodologies included in-person & internet-based behavioral experiments and virtual-reality fMRI. Findings were used to refine ACT-R/S, the first functional theory of spatial cognition within an established cognitive architecture. 2001-2007.

September 16, 2008: 10:42 am: Employment

Graduate Researcher (Advisor: Christian Schunn). Designed and conducted a series of experiments examining the transfer of logical skills across domains in expert and novice populations. Launched jACT-R and CommonReality open source projects to facilitate higher fidelity models of human cognition. Began the initial formulation and implementation of a general computational theory of human spatial reasoning (SECS & ACT-R/S). 1999-2001.

September 15, 2008: 10:30 am: Projects

Primary architect and developer for the open source Java implementation of the ACT-R cognitive architecture. This implementation of ACT-R leverages contemporary design principles to produce a simulation system that is loosely coupled and highly scalable, enabling higher fidelity real-time simulations.

September 15, 2008: 10:28 am: Employment

System Analyst & Programmer. Principal designer and developer for an internet-based planning and tutoring suite. 1997 (Summer Co-op).
