Archive for March, 2008

March 27, 2008: 6:11 pm: jACT-R

It took longer than expected (what doesn’t?), but the update to MINA 2 has been completed. MINA provides the underlying communications infrastructure between jACT-R and CommonReality (which provides the simulation brokering). The previous version had an out-of-order bug that took me forever to track down. I figured it was my code and my liberal use of threads, but it turned out to be a MINA issue. Fortunately, the 2.0 release (M2) has a more consistent threading model that functions much better.
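For anyone chasing the same kind of bug: in MINA 2 you can push session events through an ExecutorFilter backed by an OrderedThreadPoolExecutor, which dispatches to a thread pool while keeping each session’s events in arrival order. Here’s a minimal, self-contained sketch of that wiring (the codec and port are placeholders, and this is not CommonReality’s actual setup):

```java
import java.net.InetSocketAddress;

import org.apache.mina.core.service.IoHandlerAdapter;
import org.apache.mina.core.session.IoSession;
import org.apache.mina.filter.codec.ProtocolCodecFilter;
import org.apache.mina.filter.codec.serialization.ObjectSerializationCodecFactory;
import org.apache.mina.filter.executor.ExecutorFilter;
import org.apache.mina.filter.executor.OrderedThreadPoolExecutor;
import org.apache.mina.transport.socket.nio.NioSocketAcceptor;

public class OrderedAcceptorSketch {
  public static void main(String[] args) throws Exception {
    NioSocketAcceptor acceptor = new NioSocketAcceptor();

    // Codec first, then the executor filter: decoded messages for a given
    // session go to the pool but are processed in arrival order.
    acceptor.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(new ObjectSerializationCodecFactory()));
    acceptor.getFilterChain().addLast("exec",
        new ExecutorFilter(new OrderedThreadPoolExecutor()));

    acceptor.setHandler(new IoHandlerAdapter() {
      @Override
      public void messageReceived(IoSession session, Object message) {
        // messages for this session arrive here in send order
        System.out.println("received: " + message);
      }
    });

    acceptor.bind(new InetSocketAddress(9999)); // placeholder port
  }
}
```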

While initial runs suggested that it actually ran slower, subsequent runs revealed that this was an artifact of the short duration of those early tests. Long-term executions, such as my handedness model, are hovering around the original performance numbers (~50-70x realtime). No complaints from me.

The update required significant rejiggering of the state and connection management, which should now allow participants to come and go at any time during the simulation (assuming none is the exclusive owner of the clock). I can now return to examining some model fits, making further monkey refinements, and perhaps even getting my dissertation into publishable form? Craziness.
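To make that join/leave rule concrete, here’s a hypothetical sketch; the names and API are illustrative only, not CommonReality’s actual interfaces:

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative only: participants may join or leave mid-run, except the
// one participant (if any) that exclusively owns the simulation clock.
public class SimulationRoster {
  private final Set<String> participants = new HashSet<>();
  private String exclusiveClockOwner; // null when the clock is shared

  public synchronized void join(String id) {
    participants.add(id); // joining mid-simulation is always permitted
  }

  public synchronized void leave(String id) {
    // the exclusive clock owner cannot drop out without stalling everyone
    if (id.equals(exclusiveClockOwner))
      throw new IllegalStateException(id + " exclusively owns the clock");
    participants.remove(id);
  }

  public synchronized void setExclusiveClockOwner(String id) {
    exclusiveClockOwner = id;
  }
}
```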

The next stage in the monkey modeling will incorporate a new capability we’ve been playing with at the lab: gaze following. Aside from being relevant to developmentalists, gaze following presents a useful and cheap perspective-taking surrogate in the absence of more advanced processing. From my current meta-cogitating perspective, it also provides a useful starting point for goal inferencing.
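The geometric core of gaze following really is cheap. A toy sketch (entirely illustrative, not the lab’s code), assuming you already have a head position and a normalized gaze direction: cast a ray and pick the object closest to it as the inferred attention target.

```java
import java.util.Comparator;
import java.util.List;

// Illustrative vector math only; real gaze estimation is the hard part.
record Vec3(double x, double y, double z) {
  Vec3 sub(Vec3 o) { return new Vec3(x - o.x(), y - o.y(), z - o.z()); }
  Vec3 add(Vec3 o) { return new Vec3(x + o.x(), y + o.y(), z + o.z()); }
  Vec3 scale(double s) { return new Vec3(s * x, s * y, s * z); }
  double dot(Vec3 o) { return x * o.x() + y * o.y() + z * o.z(); }
  double norm() { return Math.sqrt(dot(this)); }
}

class GazeFollowing {
  // distance from point p to the gaze ray; dir is assumed normalized
  static double distanceToRay(Vec3 origin, Vec3 dir, Vec3 p) {
    Vec3 v = p.sub(origin);
    double t = Math.max(0, v.dot(dir)); // ignore targets behind the head
    return p.sub(origin.add(dir.scale(t))).norm();
  }

  // the inferred attention target is the object nearest the gaze ray
  static Vec3 inferredTarget(Vec3 head, Vec3 gazeDir, List<Vec3> objects) {
    return objects.stream()
        .min(Comparator.comparingDouble(o -> distanceToRay(head, gazeDir, o)))
        .orElse(null);
  }
}
```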

But before I do that... I really need to fix the movement tolerances in the visual system...

March 5, 2008: 3:20 pm: Cognitive Modeling, jACT-R, Robotics

Last week marked the first major deadline that I’ve had here at NRL. The boss man was presenting our work at a social cognition workshop that was populated by cognitive and developmental psychologists plus some roboticists. From his report, it was an interesting interchange.

Our push leading up to it was a set of model fits of the monkey models plus demos of the robot running them. The fits didn’t happen (a bug in CommonReality that I’m currently working on), but the movies did (and should be posted soon). The final push towards the deadline highlighted a few architectural divergences between jACT-R and ACT-R proper that I need to address. Those differences that are clearly mistakes will be patched, but those that are deliberate architectural decisions will likely be parameterized.

The post-workshop debriefing has produced some interesting discussions around here regarding the nature of theory of mind and meta-cognition more generally. Some of my early brainstorming regarding concurrent protocols seems like it really will set the stage for a general meta-cognitive capacity. Production compilation, threaded cognition, and a carefully designed declarative task description can get us really close to this goal. However, I suspect there are two pieces that still need to be developed: variablized slot names and the ability to incrementally assemble a chunk specification (i.e., a module request). I throw out this tantalizing bit with the promise that I will post a very lengthy consideration of the issue in the immediate future (I need to put together the pieces that currently exist and see how it plays out).
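To make those two pieces concrete, here is a purely hypothetical sketch; none of these classes exist in jACT-R, they just illustrate what variablized slot names and incremental assembly of a chunk specification would buy:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch only; the class and method names are made up.
public class ChunkSpecSketch {

  static final class ChunkSpec {
    final String chunkType;
    final Map<String, Object> slots;
    ChunkSpec(String chunkType, Map<String, Object> slots) {
      this.chunkType = chunkType;
      this.slots = slots;
    }
  }

  static final class ChunkSpecBuilder {
    private String chunkType;
    private final Map<String, Object> slots = new LinkedHashMap<>();

    ChunkSpecBuilder isa(String type) { chunkType = type; return this; }

    // Because the slot name is an ordinary runtime value here, a production
    // could bind it from a variable: the "variablized slot name" idea.
    ChunkSpecBuilder slot(String name, Object value) {
      slots.put(name, value);
      return this;
    }

    ChunkSpec build() {
      return new ChunkSpec(chunkType, new LinkedHashMap<>(slots));
    }
  }

  public static void main(String[] args) {
    String slotName = "actor"; // bound at runtime, not hard-coded
    ChunkSpec request = new ChunkSpecBuilder()
        .isa("observe-action")
        .slot(slotName, "monkey-2") // assembled incrementally, then dispatched
        .build();
    System.out.println(request.chunkType + " " + request.slots);
  }
}
```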

My insomnia-fueled development of the production static analyzer and visualizer is coming along nicely. Below are two screenshots from the analysis of the handedness model (which uses perspective taking to figure out which hand another person is holding up). The first shows all the positively related productions and their directions (i.e., which production can possibly follow which), with the selected production in yellow, its possible predecessors in blue, and its successors in green. The icons in the top-right corner are filters for showing all, sequence, previous, and next productions, plus positive, negative, and ambiguous relationships. There are also configurations for layouts and zoom levels. Double-clicking a production switches the view to sequence mode, which filters out all productions not directly related to the focus, across a configurable depth (currently 1).

[Screenshot: all productions visualized] [Screenshot: focus on the selected production]
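For the curious, the heart of the positive relation is just condition/effect overlap. A toy version of it (the real analysis is considerably more involved, and, as noted below, chunktype relationships aren’t handled yet):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy sketch of the "which production can possibly follow which" relation.
public class ProductionGraphSketch {

  record Production(String name, Set<String> conditions, Set<String> effects) {}

  // q can possibly follow p if something p asserts is something q tests
  static boolean canFollow(Production p, Production q) {
    for (String effect : p.effects())
      if (q.conditions().contains(effect)) return true;
    return false;
  }

  static Map<String, List<String>> successors(List<Production> productions) {
    Map<String, List<String>> graph = new LinkedHashMap<>();
    for (Production p : productions) {
      List<String> next = new ArrayList<>();
      for (Production q : productions)
        if (canFollow(p, q)) next.add(q.name());
      graph.put(p.name(), next);
    }
    return graph;
  }
}
```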

I’m still not handling chunktype relationships, and I also need to provide buffer+chunktype transformations (i.e., +visual> isa move-attention doesn’t place a move-attention into the visual buffer, but a visual-object instead). Once those are in place, I’ll add a chunktype filter to the visualizer so that you can focus on all productions that depend upon a specific chunktype, which will help with really complex models (the monkey model, with all of its contingency and error handling, is around 90 productions, all nicely layered by hierarchical goals, but that’s still a boatload of productions to visualize at once).
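A transformation table along these lines is roughly what I have in mind; the one mapping shown (+visual> isa move-attention yielding a visual-object) comes from above, the rest of the scaffolding is illustrative:

```java
import java.util.Map;

// Sketch: a request of one chunk type can leave a chunk of a *different*
// type in the buffer. Only the visual mapping below is from the post.
public class BufferResultTypes {
  static final Map<String, String> REQUEST_TO_RESULT = Map.of(
      "visual:move-attention", "visual-object");

  static String resultType(String buffer, String requestType) {
    // default: the requested chunk type is what lands in the buffer
    return REQUEST_TO_RESULT.getOrDefault(buffer + ":" + requestType,
        requestType);
  }

  public static void main(String[] args) {
    System.out.println(resultType("visual", "move-attention")); // visual-object
    System.out.println(resultType("goal", "task"));             // task
  }
}
```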

I’m hoping to get a bunch of this menial development and these fixes done on the flights to and from Amsterdam for HRI. If all goes well, there should be a groovy new release in two weeks.