Spatial Reasoning


October 2, 2008: 3:26 pm: ACT-R/S, Cognitive Modeling, Research, Spatial Reasoning

The past month has seen me up to my eyeballs in spatial modeling. I’ve been blasting out models and exploring parameter spaces. I’ve been doing all of this to get an ACT-R/S paper out the door (crazy, I know). I’ve got a single model that can accomplish two different spatial tasks across two different experiments. However, fitting the two simultaneously looks impossible. No doubt this is due to mistakes in both the model and the theory, but how much of each?

Is it a serious theoretical failing that I can’t zero-parameter fit the second experiment? Given how often modelers twiddle parameters between experiments, I doubt this. However, I’m proposing an entirely new module – new functionality. The burden of proof required for such an addition pushes me towards trying to do even more – perhaps too much.

After much head-bashing (it feels so good when you stop) and discussion, I’ve decided to split the paper in two: submit the first experiment/model ASAP, and let the model and theory issues surrounding the second percolate for a few months. While this doesn’t meet my module-imposed higher standards, it does have the added benefit of being more accessible to readers. The first experiment was short and sweet, with a cleanly modeled explanation. It makes an ideal introduction to ACT-R/S. Adding the second experiment (with judgments of relative direction) would have been far too much for all but the most extreme spatial modeler (as many of those as there are).

I just have to try to put the second experiment out of my mind until the writing is done… easier said than done.

November 5, 2007: 12:30 pm: Cognitive Modeling, Robotics, Spatial Reasoning

One thing I really like about working here at NRL is that they bring in researchers. It’s just like university research where you can actually keep abreast of current work. This morning we had David Kieras, father of EPIC, talk about some issues surrounding active visual systems.

One of the major differences between EPIC and ACT-R is that the former focuses much more heavily on embodied cognition and the interactions between perception and action (ACT-R’s embodiment system was largely modeled on EPIC’s).

Not surprisingly, one of the big issues in cognitive robotics is the perception/action link and how far up or down we model it. Take driving, for instance, or rather the simpler task of staying on the road. One could postulate a series of complex cognitive operations taking speed and position into account in order to adjust the steering wheel appropriately. Novice drivers seem to do something like this, but experienced drivers rely on a much simpler solution: by merely tracking two visual points in space, an active monitoring process can make continual, minor adjustments.
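
To make that concrete, here’s a minimal sketch of such a loop, roughly in the spirit of two-point visual control models of steering. The gains, point choices, and update rule are placeholders of my own, not anything from an actual driver model:

```java
// Sketch of a perception/action steering loop: small corrections derived from
// the visual angles to a near point (lane keeping) and a far point (anticipation),
// with no explicit reasoning about speed or road geometry. Gains are assumed.
public class TwoPointSteering {

    // Visual angles (radians) to the tracked points; stubs standing in for perception.
    double nearAngle(double t) { return 0.0; }
    double farAngle(double t)  { return 0.0; }

    double prevNear = 0.0, prevFar = 0.0;

    /** One cycle of the loop: a minor steering adjustment from how the angles changed. */
    double steeringDelta(double t, double dt) {
        double near = nearAngle(t), far = farAngle(t);
        double kNear = 0.3, kFar = 0.6, kDrift = 0.1;     // placeholder gains
        double delta = kFar * (far - prevFar)
                     + kNear * (near - prevNear)
                     + kDrift * near * dt;                 // slow drift correction
        prevNear = near;
        prevFar = far;
        return delta;
    }
}
```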

In ACT-R this is modeled with rather simplistic goal-neutral productions (there are goals, but they aren’t really doing much cognitive processing) – it’s just a perception/action cycle. EPIC would accomplish it in a similar manner, but since its productions fire in parallel, EPIC would be better able to interleave multiple tasks while driving (though the new threaded extension to ACT-R would accomplish something similar).

If we take embodiment seriously, then we have to ask: how much of each task can (at an expert level of performance) be decomposed into these perceptual/action loops, and do these loops function as self-monitoring systems without any need for conscious control (as ACT-R currently models it)?

Let’s look at a visual navigation task – although navigation is probably a poor term. This is moving from one point to another where the destination is clearly visible. There isn’t much navigation, perhaps some obstacle avoidance, but certainly not “navigation” in the cognitive-mapping sense. Anyway..

A cognitive solution would be to extract the bearing and distance to the target, prepare a motor movement, and execute it; if something goes wrong, handle it. A perceptual/action loop solution would be much simpler: move towards the target, adjusting your bearing to keep the object in the center of the FOV, and stop once close enough.
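
Here’s a rough sketch of that loop. The interface names, thresholds, and step sizes are placeholders, not any particular robot API:

```java
// Perceptual/action loop for approaching a visible target: keep it centered in
// the field of view and step forward until close enough. No bearing/distance
// plan is ever computed up front; the loop just keeps correcting.
public class ApproachTarget {

    interface Percept {            // what vision hands back each cycle
        double bearing();          // radians, 0 = target dead ahead
        double distance();         // meters to target
    }

    interface Effectors {
        void turn(double radians); // small heading correction
        void forward(double meters);
        void stop();
    }

    static void run(Percept eyes, Effectors body) {
        final double CLOSE_ENOUGH = 0.5;     // meters (assumed)
        final double STEP = 0.1;             // meters per cycle (assumed)
        while (eyes.distance() > CLOSE_ENOUGH) {
            body.turn(eyes.bearing() * 0.5); // nudge the target back toward center
            body.forward(STEP);
        }
        body.stop();
    }
}
```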

The robot currently uses the first solution: it takes the bearing and distance to the target and off-loads the actual work to a SLAM-like processing module on the robot that does the navigation. Once the robot arrives, it notifies the model and all is good. This lets the model perform other work with no problems.. but the perceptual/action loop seems a more appropriate route. The challenge is in how much processing the model is performing. Its work should be relatively limited so that it can perform other tasks..
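
The shape of that off-loading looks something like the sketch below, with made-up names (this isn’t the actual CommonReality or Player/Stage API): the model hands off bearing and distance and gets a callback when the robot arrives.

```java
// Sketch of the "cognitive" solution: cognition computes bearing/distance once,
// delegates the navigation to an on-robot module, and is free to work on other
// goals until the arrival notification comes back. Names are illustrative only.
public class OffloadedNavigation {

    interface NavModule {
        void goTo(double bearing, double distance, Runnable onArrival);
    }

    static void navigate(NavModule nav, double bearing, double distance) {
        nav.goTo(bearing, distance, () -> {
            // notified on arrival; the model resumes the spatial task here
            System.out.println("arrived, resuming task");
        });
        // ... meanwhile the model can pursue other goals
    }
}
```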

Hmmm.. more thought is necessary.

October 31, 2007: 5:13 pm: ACT-R/S, Spatial Reasoning

The Player/Stage — CommonReality bridge is coming along nicely. I was able to start on it yesterday (finally my machine is on the network), and quickly got CR controlling very basic things. Today I got the execution and threading done so that sensors can be processed in parallel. Hell, I’ve even got a test model that is attending to the objects it’s seeing in the environment. Very cool.

There do appear to be a handful of issues with respect to the generation of unique simulation objects, but they shouldn’t be too challenging to resolve. Tomorrow I should be able to start considering how to make it all work with the dissertation model.

I have to say this has gone much more smoothly than expected. I do, however, have to figure out a more robust way to handle token/type/value relationships. While I could use the blobfinder and just assign each element to a unique channel (ugh), I think a better route would be to use the fiducial simulator, which can give me an identifier and an orientation. This should be as easy as building another ISensorProcessor (much like the one for blobs) – but I’m afraid I may have to use some ghettotastic sensor fusion to get this to work.
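
For what it’s worth, here’s a rough guess at what a fiducial-based processor could look like, by analogy with the blob one. The interface shape and names below are my own placeholders, not the real ISensorProcessor signature:

```java
// Hypothetical fiducial sensor processor: a fiducial reading carries a stable
// identifier plus pose, which addresses the token/type problem that assigning
// blobfinder color channels per object only works around.
public class FiducialSensorProcessor /* implements ISensorProcessor */ {

    static class FiducialReading {
        final int id;              // stable identifier -> object token
        final double bearing;      // radians
        final double range;        // meters
        final double orientation;  // radians, from the fiducial simulator
        FiducialReading(int id, double bearing, double range, double orientation) {
            this.id = id; this.bearing = bearing;
            this.range = range; this.orientation = orientation;
        }
    }

    /** Key a batch of readings by id so the same simulation object maps to the
        same token across perceptual cycles. */
    java.util.Map<Integer, FiducialReading> process(java.util.List<FiducialReading> readings) {
        java.util.Map<Integer, FiducialReading> byId = new java.util.HashMap<>();
        for (FiducialReading r : readings)
            byId.put(r.id, r);     // last reading wins within a cycle
        return byId;
    }
}
```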

I’m thinking I’ll turn this process into a multi-part article for jactr. Speaking of which, I really ought to get the install instructions uploaded..

September 3, 2007: 7:31 pm: ACT-R/S, Cognitive Modeling, jACT-R, Research, Spatial Reasoning

The dissertation has been sent out to the committee. Now I have two fun-filled weeks to get settled back outside of D.C. and then figure out how to condense those 150+ pages into a 45-minute talk. Sounds like fun.

August 10, 2007: 4:37 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Have I mentioned how much I hate modeling dual-tasking? I will concede that using Dario & Niels’s threaded cognition does make things significantly easier – but it is still a royal PIA.

The initial stab at the dual-tasking model (spatial/verbal 1-back and pointing) was OK, but at some point between getting it working and running it at ICCM, something stopped working. Regardless, the model wasn’t quite what I was aiming for: the majority of its errors were timeouts, with very few incorrect responses.

The new one is looking much better, with spatial interference from other representations (without jacking up the base-level noise) – but still not ideal. Threaded cognition, while a step in the right direction, depends upon a very simple but problematic assumption: buffers being empty. The idea is that any goal thread will block while the buffer it needs is occupied. If you follow this, you can get some decent interleaving. However, that breaks the way many productions work: oftentimes, chains of productions depend upon a chunk sitting in the retrieval/imaginal/visual buffer.

It’s not such a challenge to harvest and reinstate these bits of information, but it is if the buffer has a capacity greater than one (the configural buffer). I’m relying on the pointing and 1-back tasks to interfere with each other at the buffer level (stepping all over each other), which means I can’t depend on the buffer being empty as a semaphore. It’s an interesting balancing act. My solution for now is to have more productions that test for the occurrence of interference (i.e., the buffer is empty when it shouldn’t be, or the wrong chunk is in it) – which has the nice benefit that I can keep track of precisely where the interference is occurring.
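
To make the buffer-as-semaphore problem concrete, here’s a toy illustration (plain Java, not ACT-R or jACT-R code) of why “empty” stops being informative once a shared buffer holds more than one chunk, and why checking for the expected chunk both detects and localizes the interference:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A capacity-limited buffer shared by two tasks. When full, adding a chunk
// evicts the oldest one, so one task's representation can be stepped on.
class ConfiguralBuffer {
    private final Deque<String> chunks = new ArrayDeque<>();
    private final int capacity;
    ConfiguralBuffer(int capacity) { this.capacity = capacity; }

    void add(String chunk) {
        if (chunks.size() >= capacity) chunks.removeFirst(); // oldest chunk stepped on
        chunks.add(chunk);
    }

    /** The interference test: is the chunk this thread needs still present? */
    boolean holdsExpected(String expected) { return chunks.contains(expected); }

    boolean isEmpty() { return chunks.isEmpty(); } // useless as a lock when capacity > 1
}

public class InterferenceDemo {
    public static void main(String[] args) {
        ConfiguralBuffer buffer = new ConfiguralBuffer(2);
        buffer.add("pointing-target");  // pointing task's representation
        buffer.add("one-back-item");    // 1-back task's representation
        buffer.add("one-back-item-2");  // evicts the pointing chunk

        // "empty" never signals anything, but the expectation check catches it:
        System.out.println("buffer empty? " + buffer.isEmpty());                  // false
        System.out.println("pointing chunk intact? "
                + buffer.holdsExpected("pointing-target"));                        // false -> interference, and we know where
    }
}
```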

The models are running now; we’ll see. One thing that I definitely cannot account for yet is why there are more errors in the verbal 1-back & rotate condition than in the stationary one. There’s just no clear theoretical position from ACT-R’s perspective. Meh.

August 9, 2007: 8:40 pm: Publications, Spatial Reasoning

Harrison, A. M. (2007). Reversal of the Alignment Effect: Influence of Visualization and Spatial Set Size. 29th Annual Conference of the Cognitive Science Society. (Paper)

August 9, 2007: 8:37 pm: ACT-R/S, Cognitive Modeling, Publications, Spatial Reasoning

Harrison, A. M. (2007). The Influence of Spatial Working Memory Constraints on Spatial Updating. 8th International Conference on Cognitive Modeling, Doctoral Consortium. (Paper)

June 7, 2007: 7:14 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Check it out.. ACT-R/S bugs crushed, and this is the latest, greatest model fit:

(more…)

June 4, 2007: 7:35 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Gotta love cognitive modeling. I’ve been looking at the parameter sensitivity of both egocentric and JRD (judgment of relative direction) pointing. The point is to fit set size 4 and then apply the same parameters to set size 8 (I ditched set size 6 in the analysis, so I’ll ignore it again here).

I had a really nice fit (relatively speaking) with common parameter values: a combined RMSE (ego/JRD) of 0.9 s, which is much smaller than the standard deviation of either the ego or JRD pointing responses (errors are in abeyance for now).
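
A sketch of one way to compute a combined RMSE, pooling the ego and JRD per-condition deviations into a single root mean square (whether the 0.9 s figure was pooled exactly this way is an assumption here):

```java
// Pool per-condition model-vs-data latency deviations from both pointing tasks
// into one root-mean-squared error, in the same units (seconds) as the latencies.
public class CombinedRmse {
    static double rmse(double[] modelEgo, double[] dataEgo,
                       double[] modelJrd, double[] dataJrd) {
        double sum = 0.0;
        int n = 0;
        for (int i = 0; i < modelEgo.length; i++, n++)
            sum += Math.pow(modelEgo[i] - dataEgo[i], 2);
        for (int i = 0; i < modelJrd.length; i++, n++)
            sum += Math.pow(modelJrd[i] - dataJrd[i], 2);
        return Math.sqrt(sum / n);
    }
}
```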

So I ran some bulk iterations. Dammit. The performance is dependent upon the activation balance between visually attended and spatially updated representations, and the random assignment of configurations and trials within a configuration is sufficient to blow that single test out of the water. Blech. So, I guess I’m going to do this parameter search with bulk runs.
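
A sketch of what the bulk-run search amounts to: for each candidate parameter value, average the fit over many runs, each with its own random configuration/trial assignment, since any single run is swamped by that randomness. The model hook below is just a stub, not jACT-R code.

```java
// Parameter sweep with bulk runs: score each parameter value by the mean fit
// over many randomized runs rather than a single one.
public class BulkParameterSearch {

    /** Stand-in for running the model once with a given seed and scoring its fit. */
    static double runModelOnce(double noise, long seed) {
        return new java.util.Random(seed).nextDouble() + noise; // placeholder score
    }

    static double scoreParameter(double noise, int bulkRuns) {
        double total = 0.0;
        for (int run = 0; run < bulkRuns; run++)
            total += runModelOnce(noise, run);   // fresh random assignment per run
        return total / bulkRuns;                 // fit averaged over the bulk runs
    }

    public static void main(String[] args) {
        for (double noise = 0.1; noise <= 0.5; noise += 0.1)
            System.out.printf("noise %.1f -> mean error %.3f%n",
                    noise, scoreParameter(noise, 100));
    }
}
```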

Ha!

May 24, 2007: 11:54 am: ACT-R/S, Spatial Reasoning

In my own dissertation data there is no evidence of online spatial updating (though there is updating nonetheless). It is really starting to look like Wang is the only one who is finding it.

This is prompting a reevaluation.

(more…)
