Archive for December, 2008

December 3, 2008: 11:50 am: Cognitive Modeling, jACT-R, Robotics

ACT-R’s manual-motor system (derived from EPIC) is really starting to show its limitations as we push on it within the embodied robotics domain. I’ve commented elsewhere regarding a more general implementation of a motor system (not just hands), but that work has been falling short. While the future certainly holds radical changes for the perceptual/motor systems, there is still a need for short-term fixes that don’t radically change the architecture.

One such fix that I’ve been playing with recently is compound motor commands. In jACT-R (and ACT-R proper), motor commands are described by the motor/manual module and their execution is handled by the underlying device. This limits the modeler to those primitive commands, which must be managed at the production level. Typically this requires some measure of goal involvement, as reflex-style productions (i.e., no goal, imaginal, or retrieval conditions) often don’t carry sufficient information to evaluate their appropriateness. Compound motor commands address this by allowing modelers to define their own motor commands that delegate to the available primitive commands. These compound commands can be added to the motor buffer (which will actually contain them), allowing reflex-style productions to merely match the contents of the motor buffer to control the flow of motor execution.

Pursue-command

The following compound command takes a configural identifier, which allows it to reference a specific spatial representation in the configural buffer. It uses this information to direct turn and walk commands (provided by PlayerStageInterface) in order to approach the target.

(chunk-type pursue-target-command (:include compound-motor-command)
  ( configural-id nil )           ;; who's our target
  ( distance-tolerance 0.2 )      ;; get w/in 20cm of target
  ( angular-tolerance 5 )         ;; get the target w/in a 10 deg arc (±5 deg)
  ( state busy )                  ;; command, not module state
  ( remove-on-complete t )        ;; when complete, remove from buffer (as if -motor)
  ( no-configural-is-error nil )) ;; t if an empty configural buffer should be an error

There are then seven simple reflex-style productions that turn and move the model towards the target. The set even includes error recovery, which is incredibly important if you’re actually acting in an environment:

(p pursue-target-attempt-recovery
   =motor>
   isa pursue-target-command
   ?motor>
   state error        ;; module, not command
==>
   =motor>
   state error        ;; command, not module
   +motor>
   isa walk
   distance -0.25     ;; jump back
)
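For a flavor of the remaining productions, here is a sketch of what one of the turn productions might look like. This is an illustration, not the actual production set: the turn command itself is provided by PlayerStageInterface as noted above, but its angle slot name is my assumption, and the angular-tolerance check is omitted for brevity.

(p pursue-target-turn            ;; hypothetical sketch
   =motor>
   isa pursue-target-command
   configural-id =target
   =configural>
   isa configural
   identifier =target
   center-bearing =bearing
   ?motor>
   - state busy                  ;; module free, command still in buffer
==>
   =motor>                       ;; keep the compound command in place
   =configural>
   +motor>
   isa turn
   angle =bearing                ;; slot name assumed; turn toward the target
)

Note that it follows the same pattern as the recovery production: the compound command stays in the motor buffer while the primitive request is issued.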

This reusable compound command and its productions are used in multiple models by merely making a +motor request after verifying that the motor buffer is empty and the module is free:

(p pursuit-attend-succeeded-match
   =goal>
   isa pursue
   step searching
   target =target
   =visual>
   isa visual-object
   token =target
   =configural>
   isa configural
   identifier =target
   center-bearing =bearing
   ?motor>
   - state busy
   buffer empty
==>
   +motor>
   isa pursue-target-command
   configural-id =target
   =configural>
   =visual>
   +goal>
   isa forage
)

This mechanism carries with it a handful of useful characteristics beyond giving modelers a higher level of motor control and abstraction.

Perception-Action Loops

With the exception of the mouse movement command, all motor commands in ACT-R are decoupled from perception. At the lowest level this is a good thing (albeit a challenge: using radians to control key targets for fingers?). However, there is ample evidence that perception and action are tightly coupled. The previous example establishes an explicit link between a to-be-monitored percept and an action. A similar mechanism could be used for the monitoring that controls steering in driving. I’m currently working on similar commands to keep our new robots continuously fixated on the object of visual attention, complete with moving eyes, head, and body. Once our visual system is able to recognize the robot’s own hands, guided reaching becomes a difference-reduction problem between the hand’s and the target’s spatial representations.
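As a rough sketch of where this is heading, a fixation command could follow the same pattern as pursue-target-command. The chunk-type below is hypothetical; the name and slots are my own, not the actual implementation:

(chunk-type fixate-command (:include compound-motor-command)
  ( configural-id nil )        ;; percept to keep fixated
  ( angular-tolerance 2 )      ;; re-orient eyes/head/body beyond this drift
  ( state busy )               ;; command, not module state
  ( remove-on-complete nil ))  ;; fixation persists until explicitly canceled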

Parameterized Commands

The previous example uses two slot-based parameters (distance-tolerance and angular-tolerance) to control the execution of the compound command. To the extent that these parameters are propagated to the underlying primitive commands, ACT-R’s learning mechanisms present possible avenues for a model to move from motor babbling to more precise control.
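Concretely, since the tolerances are ordinary slots, a request can override the defaults. A hypothetical tighter-tolerance pursuit might look like:

+motor>
isa pursue-target-command
configural-id =target
distance-tolerance 0.05   ;; get w/in 5cm instead of the default 20cm
angular-tolerance 2       ;; tighter than the default 10 deg arc

If those values are passed through to the primitive turn and walk commands, they become exactly the kind of continuous parameters that learning could tune over time.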

Further Separation of Concerns

One of my underlying principles in modeling is the separation of concerns. One aspect of this is trimming productions and goal states to their bare minimum, permitting greater composition of subsequent productions. Another is generalizing model components to maximize reuse. Compound commands permit the motor system to hold limited state information (i.e., what spatial percept to track), offloading it from the goal or task-state structures, which simultaneously simplifies models and increases reusability.

This quick modification has dramatically simplified our modeling in complex, embodied environments. It is, in principle, consistent with canonical ACT-R. The only change necessary is to allow the motor buffer to contain a chunk (as it currently does not). In terms of short-term fixes, this has some serious bang-for-the-buck.

This change has already been made in the current release of jACT-R, should anyone want to play with compound commands.

December 1, 2008: 10:06 am: Publications

Kennedy, W.G., Bugajska, M.D., Harrison, A.M., & Trafton, J.G. (2009). “Like-me” Simulation as an Effective and Cognitively Plausible Basis for Social Robotics. International Journal of Social Robotics. (Paper)