October 2, 2008: 3:26 pm: ACT-R/S, Cognitive Modeling, Research, Spatial Reasoning

The past month has seen me up to my eyeballs in spatial modeling. I’ve been blasting out models and exploring parameter spaces, all to get an ACT-R/S paper out the door (crazy, I know). I’ve got a single model that can accomplish two different spatial tasks across two different experiments. However, fitting the two simultaneously looks impossible. Inevitably this is due to mistakes in both the model and the theory, but how much of each?

Is it a serious theoretical failing that I can’t zero-parameter fit the second experiment? Given how often modelers twiddle parameters between experiments, I doubt it. However, I’m proposing an entirely new module: new functionality. The burden of proof required for such an addition pushes me towards trying to do even more, perhaps too much.

After much head-bashing (it feels so good when you stop) and discussion, I’ve decided to split the paper in two: submit the first experiment/model ASAP, and let the model and theory issues surrounding the second percolate for a few months. While this doesn’t meet my module-imposed higher standards, it has the added benefit of being penetrable to readers. The first experiment was short and sweet, with a cleanly modeled explanation. It makes an ideal introduction to ACT-R/S. Adding the second experiment (with judgments of relative direction) would have been far too much for all but the most extreme spatial modeler (as many of those as there are).

I just have to try to put the second experiment out of my mind until the writing is done… easier said than done.

October 31, 2007: 5:13 pm: ACT-R/S, Spatial Reasoning

The Player/Stage–CommonReality bridge is coming along nicely. I was able to start on it yesterday (finally my machine is on the network), and quickly got CR controlling very basic things. Today I got the execution and threading done so that sensors can be processed in parallel. Hell, I’ve even got a test model that is attending to the objects it’s seeing in the environment. Very cool.

There do appear to be a handful of issues with respect to the generation of unique simulation objects, but they shouldn’t be too challenging to resolve. Tomorrow I should be able to start considering how to make it all work with the dissertation model.

I have to say this has gone much more easily than expected. I do, however, have to figure out a more robust way to handle token/type/value relationships. While I could use the blobfinder and just assign each element to a unique channel (ugh), I think a better route would be to use the fiducial simulator, which can give me an identifier and an orientation. This should be as easy as building another ISensorProcessor (much like the one for blobs), but I’m afraid I may have to use some ghettotastic sensor fusion to get it to work.
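To make the sensor-fusion idea concrete, here is a minimal sketch of pairing blobfinder detections with fiducial readings by angular proximity, so each percept ends up with both appearance (from the blob) and a stable identity/orientation (from the fiducial). All of the field names, data shapes, and the threshold here are illustrative assumptions, not the actual Player/Stage or CommonReality API:

```python
import math

def fuse(blobs, fiducials, max_bearing_diff=math.radians(5)):
    """Greedy nearest-bearing matching of blobs to fiducials.

    Each blob/fiducial is a dict with a 'bearing' (radians); fiducials
    additionally carry 'id' and 'orientation', blobs carry 'color'.
    Returns fused percepts; unmatched blobs are dropped.
    """
    fused = []
    unmatched = list(fiducials)
    for blob in blobs:
        best, best_diff = None, max_bearing_diff
        for fid in unmatched:
            diff = abs(blob["bearing"] - fid["bearing"])
            if diff < best_diff:
                best, best_diff = fid, diff
        if best is not None:
            unmatched.remove(best)  # each fiducial pairs with at most one blob
            fused.append({"id": best["id"],
                          "orientation": best["orientation"],
                          "color": blob["color"],
                          "bearing": blob["bearing"]})
    return fused

blobs = [{"color": "red", "bearing": 0.10},
         {"color": "blue", "bearing": -0.30}]
fids = [{"id": 7, "orientation": 1.57, "bearing": 0.11},
        {"id": 3, "orientation": 0.00, "bearing": -0.29}]
print(fuse(blobs, fids))
```

Even this toy version shows where the real fusion gets ugly: the two sensors fire asynchronously, so bearings have to be compared within a time window, not just an angle window.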

I’m thinking I’ll turn this process into a multi-part article for jactr. Speaking of which, I really ought to get the install instructions uploaded…

September 3, 2007: 7:31 pm: ACT-R/S, Cognitive Modeling, jACT-R, Research, Spatial Reasoning

The dissertation has been sent out to the committee. Now I have two fun-filled weeks to get settled back outside of D.C. and then figure out how to condense those 150+ pages into a 45-minute talk. Sounds like fun.

August 10, 2007: 4:37 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Have I mentioned how much I hate modeling dual-tasking? I will concede that using Dario & Niels’s threaded cognition does make things significantly easier – but it is still a royal PIA.

The initial stab at the dual-tasking model (spatial/verbal 1-back and pointing) was OK, but at some point between getting it working and running it at ICCM, something stopped working. Regardless, the model wasn’t quite what I was aiming for: the majority of its errors were timeouts, with very few incorrect responses.

The new one is looking much better, with spatial interference from other representations (without jacking up the base-level noise), but still not ideal. Threaded cognition, while a step in the right direction, depends upon a very simple but problematic assumption: buffers being empty. The idea is that any goal thread will block while the buffer it needs is occupied. If you follow this, you can get some decent interleaving. However, it conflicts with how many productions actually work. Oftentimes, chains of productions depend upon a chunk remaining in the retrieval/imaginal/visual buffer.

It’s not much of a challenge to harvest and reinstate these bits of information, but it is if the buffer has a capacity greater than one (as the configural buffer does). I’m relying on the pointing and 1-back tasks to interfere with each other at the buffer level (stepping all over each other), which means I can’t depend on the buffer being empty as a semaphore. It’s an interesting balancing act. My solution for now is to have more productions that test for the occurrence of interference (i.e., the buffer is empty but shouldn’t be, or the wrong chunk is in the buffer), which has the nice benefit that I can keep track of precisely where the interference is occurring.
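The logic of those interference-testing productions can be sketched in plain Python (this is a toy illustration, not jACT-R code; the buffer, chunk names, and messages are all made up for the example):

```python
class Buffer:
    """A buffer that can hold more than one chunk, like the configural buffer."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.chunks = []

    def add(self, chunk):
        if len(self.chunks) < self.capacity:
            self.chunks.append(chunk)
            return True
        return False  # full buffer: the would-be blocking point under threaded cognition

def fire_production(buffer, expected_chunk, interference_log, task):
    """Instead of using an empty buffer as a semaphore, test whether the chunk
    the production chain depends on is still there, and log interference if not."""
    if expected_chunk in buffer.chunks:
        return "proceed"
    if not buffer.chunks:
        interference_log.append((task, "buffer empty but shouldn't be"))
    else:
        interference_log.append((task, "wrong chunk in buffer"))
    return "recover"

log = []
buf = Buffer(capacity=2)
buf.add("pointing-target")
buf.add("one-back-item")              # the other task steps on the buffer...
buf.chunks.remove("pointing-target")  # ...and its harvest clobbers our chunk
print(fire_production(buf, "pointing-target", log, "pointing"))  # recovery path fires
print(log)
```

The payoff is in the log: every recovery production leaves a record of which task hit which flavor of interference, which is exactly the bookkeeping I want.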

The models are running now; we’ll see. One thing that I definitely cannot account for yet is why there are more errors in the verbal 1-back & rotate condition than in the stationary one. There’s just no clear theoretical position from ACT-R’s perspective. Meh.

August 9, 2007: 8:37 pm: ACT-R/S, Cognitive Modeling, Publications, Spatial Reasoning

Harrison, A.M. (2007) The Influence of Spatial Working Memory Constraints on Spatial Updating. 8th International Conference on Cognitive Modeling. Doctoral consortium. (Paper)

June 10, 2007: 7:36 pm: ACT-R/S, Cognitive Modeling, jACT-R

New versions of jACT-R, the IDE, and ACT-R/S have been posted. I will be putting together a screencast soon to illustrate how to set up automatic model-fit calculations and parameter space searches.
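As a rough preview of what an automatic fit calculation boils down to, here is a small Python sketch scoring model latencies against observed data with RMSE and R². The latency values are invented for illustration; this is not the jACT-R implementation:

```python
import math

def rmse(model, data):
    """Root mean squared error between model predictions and observed data."""
    return math.sqrt(sum((m - d) ** 2 for m, d in zip(model, data)) / len(data))

def r_squared(model, data):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(data) / len(data)
    ss_res = sum((d - m) ** 2 for m, d in zip(model, data))
    ss_tot = sum((d - mean) ** 2 for d in data)
    return 1 - ss_res / ss_tot

observed = [1.2, 1.8, 2.5, 3.1]   # e.g. pointing latencies in seconds (made up)
predicted = [1.1, 1.9, 2.4, 3.3]
print(round(rmse(predicted, observed), 3), round(r_squared(predicted, observed), 3))
```

A parameter space search is then just a loop that re-runs the model at each parameter setting and keeps the setting minimizing the RMSE.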

June 7, 2007: 7:14 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Check it out… ACT-R/S bugs crushed, and this is the latest, greatest model fit:


June 4, 2007: 7:35 pm: ACT-R/S, Cognitive Modeling, Spatial Reasoning

Gotta love cognitive modeling. I’ve been looking at parameter sensitivity on both egocentric and JRD pointing. The point is to fit set size 4 and then apply those parameters to set size 8 (I ditched set size 6 in the analysis, so I’ll ignore it again here).

I had a really nice fit (relatively speaking) with common parameter values: a combined (ego/JRD) RMSE of 0.9 s, which is much smaller than the standard deviation of either the ego or JRD pointing responses (errors are in abeyance for now).

So I ran some bulk iterations. Dammit. The performance is dependent upon the activation balance between visually attended and spatially updated representations, and the random assignment of configurations, and of trials within a configuration, is sufficient to blow that single fit out of the water. Blech. So I guess I’m going to do this parameter search with bulk runs.
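The bulk-run search amounts to averaging over many randomized runs per parameter setting before scoring, so one lucky or unlucky assignment can’t dominate. A hedged sketch of the shape of it, where `run_model` is a random stand-in for the real model rather than anything ACT-R/S actually computes:

```python
import random
import statistics

def run_model(noise, seed):
    """Stand-in for one model run: latency depends on the noise parameter
    plus run-to-run variance from randomized trial assignment."""
    rng = random.Random(seed)
    return 1.0 + 2.0 * noise + rng.gauss(0, 0.3)

def score(noise, target=1.5, runs=200):
    """Average many runs at this parameter setting, then measure the miss."""
    latencies = [run_model(noise, seed) for seed in range(runs)]
    return abs(statistics.mean(latencies) - target)

# Sweep a few candidate noise values; the bulk averaging makes the
# comparison stable despite the per-run randomness.
best = min([0.1, 0.25, 0.5, 0.75], key=score)
print(best)
```

The annoying part is cost: every point in the parameter space now costs a couple hundred runs instead of one.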


May 24, 2007: 11:54 am: ACT-R/S, Spatial Reasoning

In my own dissertation data there is no evidence of online spatial updating (though there is updating nonetheless). It is really starting to look like Wang is the only one finding it.

This is prompting a reevaluation.


May 23, 2007: 1:59 pm: ACT-R/S, Cognitive Modeling, jACT-R

Man, I’d forgotten how interesting modeling could actually be. Since the critical path for the dissertation is currently the models, I’ve shifted from writing to modeling full bore.

