There’s been an idea percolating in the back of my mind for the past few weeks. It all started with the boss’s boss, Alan Schultz, and his desire for a cognitively plausible mechanism for robots to explain what (and presumably why) they are doing. While for Alan this was a practical desire, it struck me that from a cognitive psychology perspective what he was asking for was actually much deeper and more interesting.

What he really wanted was to have robots engage in concurrent verbal protocols (Ericsson & Simon, 1993). This is a general task that asks people to describe what they are doing while they are doing it, with as little filtering or elaboration as possible. The idea is that these utterances represent the most basic description of the contents of working memory in service of the current goal. A great introspective tool (more so than retrospective protocols, at least).

From a cognitive modeling perspective the challenge is to implement it in a general manner so that one can use the same productions across multiple models, regardless of goal structures. This means that any model of concurrent protocols should not be dependent upon interruptions of the current goal, but should operate in parallel (hence concurrent).

I do believe that I have a solution. Not surprisingly, it relies upon one of my favorite contributed ACT-R modules: threaded cognition. Combined with the learning from instruction work that John and Niels have done, an interesting new system emerges.

Extending threaded cognition

Threaded cognition allows a model to engage in multiple goals at once, interleaving them at the production level. However, there is a level of isolation that prevents individual goals from being aware of each other (a good thing). For concurrent verbal protocols to work, there needs to be a mechanism to get at the other goal(s). A modest proposal is to add a query such as this:

(p what-else-am-i-doing
   =goal>
     isa verbal-protocol
     ...
   ?goal>                           ; proposed query: bind whichever goal in the
     other-goal =otherGoal          ; buffer did not fire this production
 ==>
   +retrieval>                      ; pull the other goal into the retrieval
     =otherGoal                     ; buffer so it can be inspected
   !output! ("I am also doing ~a" =otherGoal))

The example is idiotic, but the idea is that when querying the goal buffer, you can request a reference to the other goal, which will bind to any goal in the buffer that is not the one that fired this production. With the other goal in hand, one can then begin to inspect it.

Learning from instruction

If the goal was learned via the learning from instruction work, you’d have a goal with a state slot. This state slot can then be used to retrieve from declarative memory the instructions that precede and succeed this one. In other words, you can launch a retrieval that describes what the model is doing. If one extended the learning from instruction work to include meta-information justifying each instruction, you’d then have the ability to query the model for both what it is doing and why.
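A minimal sketch of what that retrieval might look like, assuming the other goal has already been pulled into the retrieval buffer (as above), and using hypothetical chunk-type and slot names (task-goal, instruction, step) standing in for whatever the learning from instruction representation actually uses:

(p report-current-instruction
   =goal>
     isa verbal-protocol
     ...
   =retrieval>                      ; the other goal, harvested from the
     isa task-goal                  ; earlier request
     state =state
 ==>
   +retrieval>                      ; fetch the instruction chunk indexed
     isa instruction                ; by that state
     step =state
   !output! ("Looking up the instruction for step ~a" =state))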

Levels of description

Of course, this will likely result in protocols that are very fine in their granularity. The states from learning from instruction are at the unit task level (e.g., reading a letter, pressing a button). This is not really that useful. If one extended the learning from instruction work to include progressively deepening goals (à la standard task analyses), the concurrent verbal protocols could then prefer to report at the level just above unit tasks:

(p avoid-unit-tasks
   =goal>
     isa verbal-protocol
     ...
   =retrieval>                      ; the retrieved step is too fine-grained
     isa unit-task
     parent =parent
 ==>
   +retrieval>                      ; chain up to the enclosing goal and
     =parent)                       ; report at that level instead

One could then also query the model for more or less detail, and it would chain up or down the goal hierarchy accordingly.
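For the drill-down direction, a minimal sketch, assuming a request for finer detail is marked on the protocol goal and each goal chunk carries a (hypothetical) child slot pointing at its first sub-step:

(p give-more-detail
   =goal>
     isa verbal-protocol
     detail more                    ; request for finer granularity (assumed slot)
     ...
   =retrieval>
     isa task-goal                  ; the level currently being reported
     child =child                   ; hypothetical link down the hierarchy
 ==>
   +retrieval>                      ; descend one level and report from there
     =child)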

Beyond just a tool

In discussing this brainstorming, Greg Trafton realized that it may be more than just a damned useful tool. There may be genuine predictions and fits that can be evaluated with respect to the verbal protocol methodologies. Not only does this solution impose a potential multi-tasking cost on performance of the primary goal, but the simple act of engaging in the protocol would change the state of the model.

With production compilation engaged, repeated protocols would become much faster as the instruction retrievals are compiled out (just as they were when learning how to do the core task). But more interestingly, the act of accessing the other goal via the retrieval buffer would lay down a memory trace for the partially completed goals. Without the concurrent protocol, the intermediate steps of the goal would be lost (or rather, never encoded). These partial goals may make further explanations easier, strengthen the instructions, provide further compilation opportunities, and (potentially) result in the improved learning that is seen when subjects are asked to explain their actions while learning a new task.

One could also look at the different predictions regarding concurrent and retrospective protocols.

What started as a potential solution to a practical problem, when situated within ACT-R, is now looking like a genuine theoretical construct that may get some serious mileage. Gotta love it when an idea germinates into something so much greater than expected.