jACT-R


December 3, 2008: 11:50 am: Cognitive Modeling, jACT-R, Robotics

ACT-R's manual-motor system (derived from EPIC) is really starting to show its limitations as we push on it within the embodied robotics domain. I've commented elsewhere regarding a more general implementation of a motor system (not just hands), but that has been falling short. While the future certainly holds radical changes for the perceptual/motor systems, there is still the need for short-term fixes that don't radically change the architecture.

One such fix that I’ve been playing with recently has been compound motor commands. In jACT-R (and ACT-R proper), motor commands are described by the motor/manual module and their execution is handled by the underlying device. This limits the modeler to those commands and they must be managed at the production level. Typically this requires some measure of goal involvement, as reflex-style productions (i.e. no goal, imaginal, or retrieval) often don’t carry sufficient information to evaluate their appropriateness. Compound motor commands address this by allowing modelers to define their own motor commands that delegate to the primitive commands available. These compound commands can be added to the motor buffer (which will actually contain them), allowing reflex-style productions to merely match the contents of the motor buffer to control the flow of motor execution.
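
To make the delegation concrete, here is roughly how I picture a compound command on the Java side: an object holding its own small bit of state (target, tolerances) that translates the current percept into primitive commands. This is only a sketch; PrimitiveCommand, Turn, Walk, and SpatialPercept are hypothetical stand-ins, not jACT-R's actual motor or configural classes.

import java.util.ArrayList;
import java.util.List;

// hypothetical primitive commands exposed by the device (turn, walk, ...)
interface PrimitiveCommand {}
record Turn(double degrees) implements PrimitiveCommand {}
record Walk(double meters) implements PrimitiveCommand {}
// hypothetical spatial percept delivered by the configural system
record SpatialPercept(String id, double bearingDegrees, double distanceMeters) {}

// A compound command owns its own small bit of state (target id, tolerances)
// and delegates to primitive commands until its goal condition is met.
class PursueTargetCommand {
  private final String configuralId;      // who's our target
  private final double distanceTolerance; // stop within this range (meters)
  private final double angularTolerance;  // stop within this bearing error (degrees)

  PursueTargetCommand(String configuralId, double distanceTolerance, double angularTolerance) {
    this.configuralId = configuralId;
    this.distanceTolerance = distanceTolerance;
    this.angularTolerance = angularTolerance;
  }

  // Translate the current percept into the next primitive command(s) to issue.
  List<PrimitiveCommand> nextCommands(SpatialPercept percept) {
    List<PrimitiveCommand> commands = new ArrayList<>();
    if (!configuralId.equals(percept.id()))
      return commands;                                   // not our target: nothing to do
    if (Math.abs(percept.bearingDegrees()) > angularTolerance)
      commands.add(new Turn(percept.bearingDegrees()));  // face the target first
    else if (percept.distanceMeters() > distanceTolerance)
      commands.add(new Walk(percept.distanceMeters() - distanceTolerance)); // then approach
    return commands;                                     // empty list => command complete
  }
}

The point is simply that the sequencing logic lives behind a single chunk-like object sitting in the motor buffer, rather than being smeared across goal state.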

Pursue-command

The following compound command takes a configural identifier, which allows it to reference a specific spatial representation in the configural buffer. It uses this information to direct turn and walk commands (provided by PlayerStageInterface) in order to approach the target.

(chunk-type pursue-target-command (:include compound-motor-command)
   ( configural-id nil )           ;; who's our target
   ( distance-tolerance 0.2 )      ;; get w/in 20cm of target
   ( angular-tolerance 5 )         ;; get the target w/in 10 deg arc
   ( state busy )                  ;; command, not module state
   ( remove-on-complete t )        ;; when complete, -motor
   ( no-configural-is-error nil )) ;; empty configural buffer should be an error

There are then seven simple reflex-style productions that turn and move the model towards the target. That set even includes an error recovery (which is incredibly important if you’re actually acting in an environment):

(p pursue-target-attempt-recovery
   =motor>
     isa pursue-target-command
   ?motor>
     state error       ;; module, not command
 ==>
   =motor>
     state error       ;; command, not module
   +motor>
     isa walk
     distance -0.25    ;; jump back
)

This reusable compound command and its productions are used in multiple models by merely making a +motor request after verifying that the motor buffer is empty and free:

(p pursuit-attend-succeeded-match
   =goal>
     isa pursue
     step searching
     target =target
   =visual>
     isa visual-object
     token =target
   =configural>
     isa configural
     identifier =target
     center-bearing =bearing
   ?motor>
     - state busy
     buffer empty
 ==>
   +motor>
     isa pursue-target-command
     configural-id =target
   =configural>
   =visual>
   +goal>
     isa forage
)

This mechanism carries with it a handful of useful characteristics beyond giving modelers a higher level of motor control and abstraction.

Perception-Action Loops

With the exception of the mouse movement command, all motor commands in ACT-R are decoupled from perception. At the lowest level this is a good thing (albeit a challenge: using radians to control key targets for fingers?). However, there is ample evidence that perception and action are tightly coupled. The previous example establishes an explicit link between a to-be-monitored percept and an action. A similar mechanism could handle the perceptual monitoring that controls steering in driving. I'm currently working on similar commands to keep our new robots continuously fixated on the object of visual attention, complete with moving eyes, head, and body. Once our visual system is able to recognize the robot's own hands, guided reaching becomes a difference reduction problem between the hand's and target's spatial representations.
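
To make the difference reduction point concrete, the control loop amounts to: compute the offset between the hand's and the target's spatial representations and move a fraction of it until within tolerance. A minimal sketch; the Vector3 type, gain, and tolerance are assumptions for illustration, not anything in jACT-R:

// Both types are illustrative; nothing here is jACT-R API.
record Vector3(double x, double y, double z) {
  Vector3 minus(Vector3 o) { return new Vector3(x - o.x, y - o.y, z - o.z); }
  Vector3 scale(double s)  { return new Vector3(x * s, y * s, z * s); }
  double norm()            { return Math.sqrt(x * x + y * y + z * z); }
}

final class GuidedReach {
  // Returns the next incremental hand movement, or null once within tolerance.
  static Vector3 nextMove(Vector3 hand, Vector3 target, double tolerance, double gain) {
    Vector3 difference = target.minus(hand);          // hand/target spatial difference
    if (difference.norm() <= tolerance) return null;  // close enough: reach complete
    return difference.scale(gain);                    // reduce a fraction of the difference
  }
}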

Parameterized commands

The previous example uses two slot-based parameters to control the execution of the compound command. To the extent that they are propagated to the underlying primitive commands, ACT-R's learning mechanisms present possible avenues for a model to move from motor babbling to more precise control.

Further Separation of Concerns

One of my underlying principles in modeling is to separate out concerns. One aspect of this is trimming productions and goal states to their bare minimum, permitting greater composition of subsequent productions. Another aspect is the generalization of model components to maximize reuse. Compound commands permit the motor system to access limited state information (i.e. what spatial percept to track), offloading it from the goal or task state structures, simultaneously simplifying and increasing reusability.

This quick modification has dramatically simplified our modeling in complex, embodied environments. It is, in principle, consistent with canonical ACT-R. The only change necessary is to allow the motor buffer to contain a chunk (as it currently does not). In terms of short-term fixes, this has some serious bang-for-the-buck.

This change has already been made in the current release of jACT-R, should anyone want to play with it.

June 19, 2008: 1:41 pm: Big Ideas, Cognitive Modeling, jACT-R

As usual after a day of writing, I needed to take a break. I decided to watch an old webinar on the Eclipse communications project. Why does a psychologist/roboticist care about a platform-specific communications system? Aside from the possibilities of leveraging others' work on shared editing, or even chat/IM within the IDE (great for contacting me if you've got a question), it also opens the door to more effective distributed model execution.

As cognitive modelers we routinely have to run thousands of model iterations in order to collect enough data to do effective quantitative fits. For simple models, the time cost is negligible, but larger models can seriously tax computational resources for quite some time. My dissertation experiments lasted around two hours, and the model runs took almost 15 minutes per simulated subject. Given the parameter space I had to explore, it was common for my machine to be bogged down for days on end. My solution at the time was to VNC into other machines in the lab, update the model from SVN, then run a set of parameters. Not the most effective distribution mechanism.

Wouldn't it be better if you could do all this from the IDE without any hassle? Imagine if you could specify the bulk model executions and then farm them out to a set of machines without any heavy lifting or sacrificing your processor cycles. The combination of OSGi bundles (which all jACT-R models are), Eclipse's PDE, and ECF makes this possibility a very near reality (p2 will definitely help too as it will make enforcing consistent bundle states much easier).

After watching the webinar I couldn't resist and started building the pieces. Here's how the bad-boy will work (a rough sketch of the service contract follows the list):

  1. define the run configuration for the iterative model run
  2. then select the remote tab, which will list all the discovered services capable of accepting the model run (pruned based on your model's dependencies)
  3. the model and all its dependencies are exported to a zip file and sent to the remote service
  4. the remote service unpacks, imports and compiles the content (ensuring that all the deps are met and your code will actually run) – it then executes the run configuration
  5. as the execution progresses, the service transmits the state information back to your IDE (i.e. iteration #, ETA, etc)
  6. when all is done, it packages up the working directory that the model was run in and sends it back
  7. this is then expanded into your runs/ directory as if you had executed the bulk run yourself.
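
For the curious, here is the service contract as it currently exists in my head. This is only a sketch; the names (ModelRunService, RunTicket, RunStatus) are made up for illustration, not from ECF or jACT-R.

import java.io.File;
import java.util.List;

// All names here are hypothetical; this is the shape of the service, not its real API.
interface ModelRunService {
  // advertise what bundles this host can satisfy, so the IDE can prune the remote list
  List<String> availableBundles();

  // receive the zipped model + dependencies, unpack, compile, and start the run configuration
  RunTicket submit(File zippedModelAndDependencies, String runConfigurationId);

  // poll for progress: iteration number, ETA, and so on
  RunStatus status(RunTicket ticket);

  // when complete, hand back the zipped working directory for expansion into runs/
  File collectResults(RunTicket ticket);
}

record RunTicket(String id) {}
record RunStatus(int iteration, int totalIterations, long etaMillis, boolean complete) {}

From the IDE's point of view, steps 3 through 7 in the list map roughly onto submit, status, and collectResults.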

This is actually a sloppy implementation of a more general functionality that Eclipse might find useful: transmitting local bundles and invoking them on a remote system.

I’ve got a weekend away from distractions coming up and I think I can get a rough implementation working by then. This will certainly make the bulk monkey runs go so much easier (remote desktop is usable but really just too much of a hassle). Of course, I could use that time to work on the spatial modeling instead.. but that’s too much like real work for a weekend.

May 6, 2008: 3:23 pm: Errata, jACT-R, Robotics

I am. I am.

Surprise, surprise, my previous rant was completely unjustified. Turns out it was my own stupid fault. I was programmatically launching player but forgot to harvest the program's stdout and stderr. The process's buffers were flooded. Stooopid Ediot!

But, at least the monkey sims can run for much longer now. And not a moment too soon as the robot lab is rapidly filling with interns. (It’s such a nice change of pace to no longer be on the bottom of the totem pole)

May 5, 2008: 3:59 pm: Errata, jACT-R, Robotics

I love division of labor, it saves us all from having to reinvent the wheel.. but sometimes it just drives me insane.

I've hooked up player/stage through a Java client, enabling jACT-R (and the monkey models) to interact in the simulated robotic environment. For some time now I've had a bug where stage would freeze after around four minutes of simulated time. I initially thought it was due to my unorthodox use of the client (the threading model in the client is a tad weak), so I reverted to the normal usage and the simulations were running past four minutes with no problem. Yay, problem fixed!

Nope. Now it deadlocks around 6 minutes. So maybe it's a problem with stage? Lo and behold, someone else was encountering a similar problem. The recommended fix? Upgrade to playerstage 2.1, which is, of course, not supported by the client yet, and no one has posted any word on an ETA.

I am able to detect when it occurs, which led me to believe that I'd be able to disconnect the clients and reconnect. Unfortunately, the only way to resurrect stage at this point is to completely kill the socket-owning process. My only option is to not only disconnect the clients, but then force quit stage and restart it.. Ick!

I think it’s time to work on something else for a little while.

* This is not meant as a disparagement of anyone's work, rather just me venting. I fully acknowledge that the same statements could be (and probably have been) applied to my own work.

April 16, 2008: 5:03 pm: Errata, jACT-R

After finally getting the movement tolerances working in jACT-R so that the robo-monkeys could see each other as they move around, I came upon a motor bug. Actually, I'm still trying to hunt down the exact circumstances under which it occurs. It's particularly challenging because it involves the robot/simulation's movement-status updates colliding with the model's desire for things to complete in a predictable amount of time.

Since this bug is so hard to track down given the current tools, I decided it was about time to implement a long-desired feature. In my experience (and the reports of others support this), most of us just run the models and, when something goes wrong, dig through the trace. Only as a last resort do we ever really step into the models. (Basically, all that work I did to integrate break-points and a debugger was for naught :) )

However, once we've found where something went wrong we immediately want to know what fired (easy enough from the trace) and what the state and contents of the buffers were (not so easy). The log view jACT-R has provided is good, but not great. The tabular format and filters make it easier to ignore irrelevant info, but you still don't get a clear picture of the buffer contents. To rectify this I've added the ability to dynamically modify the tooltip for the log viewer. Combined with the buffer event listener and the runtime tracer, the log view can now display the full contents of any buffer as well as its flag values both before and after conflict resolution.
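
Conceptually the bookkeeping behind that tooltip is simple: snapshot each buffer's contents and flags before and after conflict resolution, indexed by cycle, and let the tooltip query that index on hover. A minimal sketch of the idea, with illustrative type names rather than the actual tracer/listener API:

import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A frozen view of one buffer at one point in time (chunk contents plus flag values).
record BufferSnapshot(List<String> chunkContents, Map<String, String> flags) {}

final class BufferTraceIndex {
  private final Map<Long, Map<String, BufferSnapshot>> beforeResolution = new HashMap<>();
  private final Map<Long, Map<String, BufferSnapshot>> afterResolution  = new HashMap<>();

  void recordBefore(long cycle, String buffer, BufferSnapshot snapshot) {
    beforeResolution.computeIfAbsent(cycle, c -> new HashMap<>()).put(buffer, snapshot);
  }

  void recordAfter(long cycle, String buffer, BufferSnapshot snapshot) {
    afterResolution.computeIfAbsent(cycle, c -> new HashMap<>()).put(buffer, snapshot);
  }

  // What the tooltip asks for when you hover a log row and toggle before/after.
  BufferSnapshot lookup(long cycle, String buffer, boolean before) {
    Map<Long, Map<String, BufferSnapshot>> index = before ? beforeResolution : afterResolution;
    return index.getOrDefault(cycle, Map.of()).get(buffer);
  }
}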

Buffer content tooltip

The buttons on the tooltip allow you to toggle between seeing the buffer contents before and after conflict resolution. It’s not completely correct right now in two regards: some state information may be lost at the start and end of a run (i.e. your model starts with buffers pre-stuffed or the runtime terminates before sending the last bits of info), and the changes to the chunks while in the buffer are not being tracked. The first issue I don’t care about, but the second will be addressed soon.

This added information does carry with it a moderate performance penalty so I’ll be including it as a runtime configuration option. A little later I will also add tooltip information for the declarative (i.e. visualization of encoded chunks) and procedural (i.e. fired production and resolved instantiation, encoded productions) module traces.

But for now, I’m quite happy and this is making tracking down that spurious motor.state=error soooo much easier.

March 27, 2008: 6:11 pm: jACT-R

It took longer than expected (what doesn’t?), but the update to MINA 2 has been completed. MINA provides the underlying communications infrastructure between jACT-R and CommonReality (which provides the simulation brokering). The previous version had an out-of-order bug that took me forever to track down. I figured it was my code and my liberal use of threads, but it turned out to be a MINA issue. Fortunately, the 2.0 release (M2) has a more consistent threading model that functions much better.

While initial runs showed that it actually ran slower, subsequent runs have revealed that was due to the short duration of those early tests. Long term executions, such as my handedness model, are hovering around the original performance numbers (~ 50-70x realtime). No complaints from me.

The update required significant rejiggering of the state and connection management, which should now allow participants to come in and out at any time during the simulation (assuming none is the exclusive owner of a clock). I can now return to examining some model fits, further monkey refinements, perhaps get my dissertation into a publishable form? Craziness.

The next stage in the monkey modeling will incorporate some new features we’ve been playing with at the lab: gaze following. Aside from being relevant to developmentalists, gaze following presents a useful and cheap perspective-taking surrogate in the absence of more advanced processing. From my current meta-cogitating perspective, this also provides a useful starting point for goal inferencing.

But before I do that.. I really need to fix the movement tolerances in the visual system..

March 5, 2008: 3:20 pm: Cognitive Modeling, jACT-R, Robotics

Last week marked the first major deadline that I’ve had here at NRL. The boss man was presenting our work at a social cognition workshop that was populated by cognitive and developmental psychologists plus some roboticists. From his report, it was an interesting interchange.

Our push leading up to it was a set of model fits of the monkey models plus demos of the robot running them. The fits didn't happen (a bug in CommonReality that I'm working on currently), but the movies did (and should be posted soon). The final push towards the deadline has highlighted a few architectural divergences between jACT-R and ACT-R proper that I need to address. Those differences that are clearly mistakes will be patched, but those that are architectural decisions will likely be parameterized.

The post-workshop debriefing has produced some interesting discussions around here regarding the nature of theory of mind and meta-cognition more generally. Some of my early brain-storming regarding concurrent protocols seems like it really will set the stage for a general meta-cognitive capacity. Production compilation, threaded cognition, and a carefully designed declarative task description can get us really close to this goal. However, I suspect there are two pieces that need to be developed: variablized slot names and the ability to incrementally assemble a chunk specification (i.e. module request). I throw out this tantalizing bit with the promise that I will post a very lengthy consideration of this issue in the immediate future (I need to put together the pieces that currently exist and see how it plays out).

My insomnia development of the production static analyzer and visualizer is coming along nicely. Below are two screenshots from the analysis of the handedness model (which uses perspective taking to figure out which hand another person is holding up). The first shows all the positively related productions and their directions (i.e. which production can possibly follow which), with the selected production in yellow, its possible predecessors in blue, and successors in green. The icons in the top-right corner are filters for showing all, sequence, previous, and next productions, plus positive, negative, and ambiguous relationships. There are also configurations for layouts and zoom levels. Double-clicking a production switches the view to sequence, which filters out all the productions that are not directly related to the focus, across a configurable depth (currently 1).

[Screenshots: all productions visualized; focus on the selected production]

I'm still not handling chunktype relationships, and I also need to provide buffer+chunktype transformations (i.e. +visual> isa move-attention doesn't place a move-attention into the visual buffer, but a visual-object instead). Once those are in place, I'll add a chunktype filter to the visualizer so that you can focus on all the productions that depend upon a specific chunktype, which will help with really complex models (the monkey model, with all of its contingency and error handling, is around 90 productions, all nicely layered by hierarchical goals, but that's still a boatload of productions to have to visualize at once).
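
The buffer+chunktype transformation piece is conceptually just a lookup from (buffer, requested chunktype) to the chunktype that actually lands in the buffer. A toy sketch, using only the move-attention example above; the names are illustrative, not the analyzer's real data structures:

import java.util.Map;

// Illustrative only; not the analyzer's actual data structures.
final class RequestTransforms {
  // "buffer:requested chunktype" -> chunktype that actually ends up in the buffer
  private static final Map<String, String> TRANSFORMS =
      Map.of("visual:move-attention", "visual-object"); // the example from the post

  static String resultingChunkType(String buffer, String requestedChunkType) {
    return TRANSFORMS.getOrDefault(buffer + ":" + requestedChunkType, requestedChunkType);
  }
}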

I’m hoping to get a bunch of this menial development and fixes done on the flight to/fro Amsterdam for HRI. If all goes well, there should be a groovy new release in two weeks.

December 13, 2007: 7:55 am: Cognitive Modeling, jACT-R

There's been an idea percolating in the back of my mind for the past few weeks. It all started with the boss's boss, Alan Schultz, and his desire for a cognitively plausible mechanism for robots to explain what (and presumably why) they are doing. While for Alan this was a practical desire, it struck me that from a cognitive psychology perspective what he was asking for was actually much deeper and more interesting.

What he really wanted was to have robots engage in concurrent verbal protocols (Ericsson & Simon, 1993). This is a general task that asks people to describe what they are doing, while they are doing it, with as little filtering or elaboration as possible. The idea is that these utterances represent the most basic description of the contents of working memory in service of the current goal. A great introspective tool (more so than retrospective protocols, at least).

From a cognitive modeling perspective the challenge is to implement it in a general manner so that one can use the same productions across multiple models, regardless of goal structures. This means that any model of concurrent protocols should not be dependent upon interruptions of the current goal, but should operate in parallel (hence concurrent).

I do believe that I have a solution. Not surprisingly it relies upon one of my favorite contributed ACT-R modules, threaded cognition. Combined with the learning from instruction work that John and Niels have done, an interesting new system emerges.

Extending threaded cognition

Threaded cognition allows a model to engage in multiple goals at once, interleaving them at the production level. However, there is a level of isolation that prevents individual goals from being aware of each other (a good thing). For concurrent verbal protocols to work, there needs to be a mechanism to get at the other goal(s). A modest proposal is to add a query such as this:

(p what-else-am-i-doing
  =goal>
    isa verbal-protocol
    ....
  ?goal>
    other-goal =otherGoal
  ==>
   +retrieval>
    =otherGoal
   !output! (Im also doing =otherGoal))

The example is idiotic, but the idea is that when querying the goal buffer, you can request a reference to the other goal, which will bind to any goal in the buffer that is not the one that fired this production. With the other goal in hand, one can then begin to inspect it.

Learning from instruction

If the goal was learned via the learning from instruction work, you'd have a goal with a state slot. This state slot can then be used to retrieve from declarative memory the instructions that precede and succeed the current one. In other words, you can launch a retrieval that describes what the model is doing. If one extended the learning from instruction work to include meta-information about the justification for each instruction, you'd also have the ability to query the model for why it is doing what it is doing.

Levels of description

Of course, this will likely result in protocols that are very fine in their granularity. The states from learning from instruction are at the unit task level (e.g. reading a letter, pressing a button). This is not really that useful. If one extended the learning from instruction work to include progressively deepening goals (a la standard task analyses), the concurrent verbal protocols could then prefer to report at the level just above unit tasks:

(p avoid-unit-tasks
  =goal>
   isa verbal-protocol
   ...
  =retrieval>
   isa unit-task
   parent =parent
 ==>
  +retrieval>
   =parent)

One could then also query the model for more or less detail where it would then chain up or down the goal.

Beyond just a tool

In discussing this brainstorming, Greg Trafton realized that it may be more than just a damned useful tool. There may be genuine predictions and fits that can be evaluated with respect to the verbal protocol methodologies. Not only does this solution present a potential multi-tasking cost to performance of the primary goal, but the simple act of engaging in the protocols would change the state of the model.

With production compilation engaged, repeated protocols would become much faster as the instruction retrievals are compiled out (just as they were when learning how to do the core task). But more interestingly, the act of accessing the other goal via the retrieval buffer would lay down a memory trace for the partially completed goals. Without the concurrent protocol, the intermediate steps of the goal would be lost (or rather, never encoded). These partial goals may make further explanations easier, strengthen the instructions, provide further compilation opportunities, and (potentially) result in the improved learning that is seen when subjects are asked to explain their actions while learning a new task.

One could also look at the different predictions regarding concurrent and retrospective protocols.

What started as a potential solution to a practical problem, when situated within ACT-R, is now looking like a genuine theoretical construct that may get some serious mileage. Gotta love it when an idea germinates into something so much greater than expected.

November 29, 2007: 12:25 pm: jACT-R

Alrighty, the first functional commit of the new general motor module is in place. I have to say, I’m quite pleased with it. The implementation is general enough that it can be easily extended or adapted without having to fuss with threading or common reality interfacing. In fact I will be refactoring the vocal module to use it as well (although as a separate module).

The implementation includes a basic keyboard/mouse device on the CommonReality side (although the mouse elements are sketchy). While not all the motor commands are currently implemented (only punch, peck, peck-recoil), adding the remainder should be relatively straightforward. [code is in the repository, I have not uploaded to the update site yet]

In terms of major basic functionality gaps on the /PM side, there is just the missing SwingSensor that makes Java GUIs visually available. I have no idea when I'll get to that (any volunteers?) as my next focus is the embedded work I'm doing here at NRL.

The current goal is to get cracking on the compatibility tests over the winter holiday. Ideally, I'd like to have a fully verified compatible version for the next workshop this summer.

November 14, 2007: 12:37 pm: Cognitive Modeling, jACT-R

Last week the basic robotic visual system interface with jACT-R was finished and running fairly well (a tad inefficient, but.. whatever). Before launching into the final piece, motor control, I had to implement a long neglected issue: how to communicate efferent information from jACT-R to CommonReality.

Well, the extended weekend gave me some quality pondering time and yesterday I was able to whip up a viable solution. As a quick test-bed for it, I decided to implement a more general sensor/module pair (since motor control is necessarily yoked to a specific device) : the speech system.

There is now a general purpose speech system (org.commonreality.sensors.speech.DefaultSpeechSensor) and jACT-R’s vocal module (org.jactr.modules.pm.vocal.six.DefaultVocalModule6). Rock on.

Up next, some basic motor control and then I can get back to migrating the dissertation model to the robotics platform and then start carving up the dissertation for publication.
