Archive for November, 2007

November 29, 2007: 12:25 pm: jACT-R

Alrighty, the first functional commit of the new general motor module is in place. I have to say, I’m quite pleased with it. The implementation is general enough that it can be easily extended or adapted without having to fuss with threading or CommonReality interfacing. In fact, I will be refactoring the vocal module to use it as well (although as a separate module).

The implementation includes a basic keyboard/mouse device on the CommonReality side (although the mouse elements are sketchy). While not all of the motor commands are currently implemented (only punch, peck, and peck-recoil), adding the rest should be relatively straightforward. [The code is in the repository; I have not uploaded it to the update site yet.]
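
To give a flavor of what the keyboard device ultimately has to do, here’s a tiny illustrative sketch using java.awt.Robot. This is not the actual CommonReality implementation, and the finger-to-key mapping is assumed; a punch just boils down to pressing and releasing a single key:

```java
import java.awt.AWTException;
import java.awt.Robot;
import java.awt.event.KeyEvent;

public class KeyPuncher {

  private final Robot _robot;

  public KeyPuncher() throws AWTException {
    _robot = new Robot();
  }

  /** Execute a punch: press and release the key sitting under the given finger. */
  public void punch(int keyCode) {
    _robot.keyPress(keyCode);
    _robot.keyRelease(keyCode);
  }

  public static void main(String[] args) throws AWTException {
    // e.g. the right index finger resting on the home row
    new KeyPuncher().punch(KeyEvent.VK_J);
  }
}
```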

In terms of major functionality gaps on the /PM side, there is just the missing SwingSensor that would make Java GUIs visually available to models. I have no idea when I'll get to that (any volunteers?), as my next focus is the embedded work I'm doing here at NRL.

The current goal is to get cracking on the compatibility tests over the winter holiday. Ideally, I'd like to have a fully verified compatible version for the next workshop this summer.

November 14, 2007: 12:37 pm: Cognitive Modeling, jACT-R

Last week the basic robotic visual system interface with jACT-R was finished and running fairly well (a tad inefficient, but… whatever). Before launching into the final piece, motor control, I had to tackle a long-neglected issue: how to communicate efferent information from jACT-R to CommonReality.

Well, the extended weekend gave me some quality pondering time, and yesterday I was able to whip up a viable solution. As a quick test bed for it, I decided to implement a more general sensor/module pair (since motor control is necessarily yoked to a specific device): the speech system.

There is now a general-purpose speech system (org.commonreality.sensors.speech.DefaultSpeechSensor) and jACT-R’s vocal module (org.jactr.modules.pm.vocal.six.DefaultVocalModule6). Rock on.
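
For the curious, here’s a rough sketch of the general efferent idea with entirely hypothetical names (none of this is the actual CommonReality or jACT-R API): the module on the model side packages up a request, and the sensor on the other side executes it and signals completion back.

```java
/** Hypothetical description of an efferent request (not the CommonReality API). */
interface EfferentRequest {
  String describe(); // e.g. "speak: 'hello world'"
}

/** Hypothetical sensor-side hook that executes a request and signals completion. */
interface EfferentHandler {
  void execute(EfferentRequest request, Runnable whenDone);
}

/** A model-side vocal request, reduced to its essentials. */
class SpeakRequest implements EfferentRequest {
  private final String _text;

  SpeakRequest(String text) {
    _text = text;
  }

  public String describe() {
    return "speak: '" + _text + "'";
  }
}

/** A stand-in sensor that just logs; a real speech sensor would drive TTS here. */
class LoggingSpeechHandler implements EfferentHandler {
  public void execute(EfferentRequest request, Runnable whenDone) {
    System.out.println("executing " + request.describe());
    whenDone.run(); // report completion back to the module so the buffer can free up
  }
}
```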

Up next: some basic motor control. Then I can get back to migrating the dissertation model to the robotics platform and start carving up the dissertation for publication.

November 5, 2007: 12:30 pm: Cognitive Modeling, Robotics, Spatial Reasoning

One thing I really like about working here at NRL is that they bring in researchers. It’s just like university research, where you can actually keep abreast of current work. This morning we had David Kieras, the father of EPIC, talk about some issues surrounding active visual systems.

One of the major differences between EPIC and ACT-R is that the former focuses much more heavily on embodied cognition and the interactions between perception and action (ACT-R’s embodiment system was largely modeled on EPIC’s).

Not surprisingly, one of the big issues in cognitive robotics is the perception/action link and how far up or down we model it. Take driving, for instance, or rather the simpler task of staying on the road. One could postulate a series of complex cognitive operations that take speed and position into account in order to adjust the steering wheel appropriately. Novice drivers seem to do something like this, but experienced drivers rely on a much simpler solution: by merely tracking two visual points in space, an active monitoring process can make continual, minor adjustments.
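
To make that concrete, here’s a tiny sketch loosely in the spirit of two-point steering models. The VisualScene interface and the gains are made-up assumptions; the point is only that each cycle is a trivial adjustment rather than a chain of deliberate reasoning.

```java
public class SteeringLoop {

  /** Hypothetical source of the two tracked visual angles (radians). */
  interface VisualScene {
    double nearPointAngle();
    double farPointAngle();
  }

  // Assumed gains, purely for illustration
  private static final double K_FAR    = 0.4;  // react to changes in the far point
  private static final double K_NEAR   = 0.2;  // react to changes in the near point
  private static final double K_STABLE = 0.05; // slowly pull the near point to center

  private double _lastNear, _lastFar;

  /** One perception/action cycle: returns a small steering-wheel adjustment. */
  public double adjust(VisualScene scene, double dt) {
    double near = scene.nearPointAngle();
    double far = scene.farPointAngle();
    double delta = K_FAR * (far - _lastFar)
                 + K_NEAR * (near - _lastNear)
                 + K_STABLE * near * dt;
    _lastNear = near;
    _lastFar = far;
    return delta; // continual, minor adjustment; no explicit goal reasoning
  }
}
```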

In ACT-R this is modeled with rather simplistic, goal-neutral productions (there are goals, but they aren’t really doing much cognitive processing) – it’s just a perception/action cycle. EPIC would accomplish it in a similar manner, but since its productions fire in parallel, EPIC would be better able to interleave multiple tasks while driving (though the new threaded extension to ACT-R would accomplish something similar).

If we take embodiment seriously, then we have to ask: how much of each task can (at an expert level of performance) be decomposed into these perception/action loops? And do these loops function as self-monitoring systems without any need for conscious control (as ACT-R currently models it)?

Let’s look at a visual navigation task – although navigation is probably a poor term. This is moving from one point to another where the destination is clearly visible. There isn’t much navigation, perhaps some obstacle avoidance, but certainly not “navigation” in the cognitive-mapping sense. Anyway..

A cognitive solution would be to extract the bearing and distance to the target, prepare a motor movement, and execute it; if something goes wrong, handle it. A perception/action loop solution would be much simpler: move toward the target, adjusting your bearing to keep the object in the center of the FOV, and stop once you’re close enough.
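
Here’s a minimal sketch of that loop. The perceptual and motor interfaces, gains, and thresholds are all assumptions; what matters is how little work each cycle has to do.

```java
public class ApproachTarget {

  /** Hypothetical perceptual interface. */
  interface Percepts {
    double targetBearing(); // radians off the center of the FOV
    double targetRange();   // meters, or any visual proxy for distance
  }

  /** Hypothetical motor interface. */
  interface Effectors {
    void turn(double radiansPerSecond);
    void forward(double metersPerSecond);
    void stop();
  }

  public void run(Percepts eyes, Effectors body) throws InterruptedException {
    while (eyes.targetRange() > 0.5) {        // "close enough" threshold is assumed
      body.turn(-0.8 * eyes.targetBearing()); // keep the target centered in the FOV
      body.forward(0.3);                      // and keep moving toward it
      Thread.sleep(50);                       // pace the loop at the perceptual update rate
    }
    body.stop();
  }
}
```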

The robot uses the first solution: it takes the bearing and distance to the target and off-loads the actual work to a SLAM-like processing module on the robot that does the navigation. Once the robot arrives, the module notifies the model and all is good. This lets the model perform other work with no problems, but the perception/action loop seems like the more appropriate route. The challenge is in how much processing the model would be doing; its work should be limited enough that it can still perform other tasks.
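
For contrast, here’s a sketch of the hand-off approach: compute the bearing and distance once, delegate to the on-board navigation module, and stay free until the completion callback fires. Navigator is a hypothetical stand-in for the robot’s SLAM-like module, not its real interface.

```java
/** Hypothetical stand-in for the robot's SLAM-like navigation module. */
interface Navigator {
  void goTo(double bearing, double distance, Runnable onArrival);
}

class DelegatedApproach {
  void approach(Navigator nav, double bearing, double distance) {
    nav.goTo(bearing, distance, new Runnable() {
      public void run() {
        // notify the model that the movement finished;
        // it was free to do other work in the meantime
      }
    });
  }
}
```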

Hmmm.. more thought is necessary.