Archive for April, 2008

April 16, 2008: 5:03 pm: Errata, jACT-R

After finally getting the movement tolerances working in jACT-R so that the robo-monkeys can see each other as they move around, I came upon a motor bug. Actually, I'm still trying to hunt down the exact circumstances under which it occurs. It's particularly challenging because it involves the interaction between the robot/simulation sending a movement status and the model's desire for things to complete in a predictable amount of time.

Since this bug is so hard to track down given the current tools, I decided it was about time to implement a long-desired feature. In my experience (and the reports of others support this), most of us just run the models and, when something goes wrong, dig through the trace. Only as a last resort do we ever really step into the models. (Basically, all that work I did to integrate break-points and a debugger was for naught :) )

However, once we've found where something went wrong, we immediately want to know what fired (easy enough from the trace) and what the state and contents of the buffers were (not so easy). The log view jACT-R provides is good, but not great. The tabular format and filters make it easier to ignore irrelevant info, but you still don't get a clear picture of the buffer contents. To rectify this I've added the ability to dynamically modify the tooltip for the log viewer. Combined with the buffer event listener and the runtime tracer, the log view can now display the full contents of any buffer, as well as its flag values, both before and after conflict resolution.

Buffer content tooltip

The buttons on the tooltip allow you to toggle between seeing the buffer contents before and after conflict resolution. It's not completely correct right now in two regards: some state information may be lost at the start and end of a run (e.g. your model starts with buffers pre-stuffed, or the runtime terminates before sending the last bits of info), and changes to chunks while they are in the buffer are not being tracked. The first issue I don't care about, but the second will be addressed soon.
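For the curious, the gist is just snapshotting buffer state on either side of conflict resolution and keying the snapshots by cycle so the tooltip can look them up later. A rough sketch of the idea (the class and method names here are made up for illustration, not the actual jACT-R interfaces):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: capture buffer contents around conflict resolution so a
// log-view tooltip can later display the "before" and "after" snapshots.
// None of these names are the actual jACT-R interfaces.
public class BufferSnapshotRecorder {

  public record Snapshot(String bufferName, List<String> chunkContents, Map<String, String> flags) {}

  // keyed by simulation cycle, then by buffer name
  private final Map<Long, Map<String, Snapshot>> before = new HashMap<>();
  private final Map<Long, Map<String, Snapshot>> after = new HashMap<>();

  public void recordBeforeConflictResolution(long cycle, Snapshot snapshot) {
    before.computeIfAbsent(cycle, c -> new HashMap<>()).put(snapshot.bufferName(), snapshot);
  }

  public void recordAfterConflictResolution(long cycle, Snapshot snapshot) {
    after.computeIfAbsent(cycle, c -> new HashMap<>()).put(snapshot.bufferName(), snapshot);
  }

  // What the tooltip would query when the user toggles before/after.
  public Snapshot get(long cycle, String bufferName, boolean afterResolution) {
    Map<String, Snapshot> source = (afterResolution ? after : before).getOrDefault(cycle, Map.of());
    return source.get(bufferName);
  }
}
```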

This added information does carry with it a moderate performance penalty, so I'll be including it as a runtime configuration option. A little later I will also add tooltip information for the declarative (i.e. visualization of encoded chunks) and procedural (i.e. fired production, resolved instantiation, and encoded productions) module traces.

But for now, I’m quite happy and this is making tracking down that spurious motor.state=error soooo much easier.

April 9, 2008: 3:30 pm: Big Ideas

First off, I'm not talking about distributed representations; no, this is entirely about implementation.

While the community is still some ways off from needing something like this, in the future many of us will be running models across much longer time-spans, both simulated and real. These models are going to seriously tax the underlying implementations. Currently jACT-R has no problems with chunk counts in the hundreds of thousands, but millions? And what about productions? Most hand-coded models don’t exceed a few dozen productions, but with production compilation enabled for a long period in dynamic environments, it’s easy to envision scenarios where the production counts will be on par with the chunks.

This scenario is rapidly approaching for us working with robots operating over anything but the most trivial time scales.

The big information companies out there manage this volume of information easily with existing database systems. Why should the ACT-R implementations have to manage their own information searches, linkages and retrievals when there are specially tailored systems that can do it so much better?

Imagine if all chunks, types and productions were backed by something like MySQL. ACT-R could be freed to maintain only the most active subset of procedural and declarative memory, while less active information would be managed by the database. This scenario carries with it a slew of useful benefits.
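To make "backed by something like MySQL" a bit more concrete, here's a deliberately bare-bones sketch of a relational layout for chunks. Every table and column name is invented for illustration; the real mapping would surely be richer:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical sketch only: a bare-bones relational layout for declarative memory.
// Table and column names are invented for illustration, not part of jACT-R.
public class DeclarativeSchema {

  public static void create(Connection connection) throws SQLException {
    try (Statement statement = connection.createStatement()) {
      statement.executeUpdate(
          "CREATE TABLE IF NOT EXISTS chunk (" +
          "  id BIGINT PRIMARY KEY," +
          "  chunk_type VARCHAR(128) NOT NULL," +
          "  base_level_activation DOUBLE," +
          "  creation_time DOUBLE)");
      statement.executeUpdate(
          "CREATE TABLE IF NOT EXISTS slot (" +
          "  chunk_id BIGINT NOT NULL REFERENCES chunk(id)," +
          "  name VARCHAR(128) NOT NULL," +
          "  value VARCHAR(512)," +
          "  PRIMARY KEY (chunk_id, name))");
      statement.executeUpdate(
          "CREATE TABLE IF NOT EXISTS association (" +
          "  source_id BIGINT NOT NULL REFERENCES chunk(id)," +
          "  target_id BIGINT NOT NULL REFERENCES chunk(id)," +
          "  strength DOUBLE," +
          "  PRIMARY KEY (source_id, target_id))");
    }
  }

  public static void main(String[] args) throws SQLException {
    // Any JDBC URL would do; MySQL appears here purely because it was mentioned above.
    try (Connection connection =
             DriverManager.getConnection("jdbc:mysql://localhost/actr", "user", "password")) {
      create(connection);
    }
  }
}
```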

Scalability

The most obvious benefit of database backing would be scalability. Database systems are designed to handle data quantities well beyond what we have attempted to this point, but quantities that are certainly within the realm of those required by long-running models. By limiting ACT-R to only the immediately relevant information set, the architecture would require significantly fewer resources, leaving the rest to be better leveraged elsewhere.

This information subset could be intelligently managed using information we already have available. As the chunks in a buffer are changed, relevant chunks could be prefetched by following the associative links. The declarative subset would effectively be managed through the activation levels. Similarly, productions could be prefetched based on the chunktypes of the chunks currently in declarative memory. As the chunks and productions are deemed to be no longer relevant, they can be committed back to the database system (if changed) or discarded outright.
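Roughly, the prefetch logic could look something like this (Chunk, Production and Database are stand-in types, not jACT-R classes):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of prefetching the "relevant subset" by walking associative
// links out to a fixed depth, and pulling productions keyed by chunk-type.
public class SubsetPrefetcher {

  public interface Chunk {
    Set<Chunk> associatedChunks();
    String chunkType();
  }

  public interface Production {}

  public interface Database {
    Set<Production> productionsMatching(String chunkType);
  }

  private final Database database;

  public SubsetPrefetcher(Database database) {
    this.database = database;
  }

  /** Collect chunks reachable from the buffer contents within maxDepth associative hops. */
  public Set<Chunk> prefetchChunks(Set<Chunk> bufferContents, int maxDepth) {
    Set<Chunk> visited = new HashSet<>(bufferContents);
    Set<Chunk> frontier = new HashSet<>(bufferContents);
    for (int depth = 0; depth < maxDepth; depth++) {
      Set<Chunk> next = new HashSet<>();
      for (Chunk chunk : frontier) {
        for (Chunk associated : chunk.associatedChunks()) {
          if (visited.add(associated)) next.add(associated);
        }
      }
      frontier = next;
    }
    return visited;
  }

  /** Prefetch productions whose conditions mention the chunk-types of the active chunks. */
  public Set<Production> prefetchProductions(Set<Chunk> activeChunks) {
    Set<Production> productions = new HashSet<>();
    for (Chunk chunk : activeChunks) {
      productions.addAll(database.productionsMatching(chunk.chunkType()));
    }
    return productions;
  }
}
```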

Persistence

Obviously, anything stored in a database is persisted. This gives us the ability to create bootstrapped models. I can't count the number of times I've heard researchers wish they could have preexisting models of some user group. This scheme would allow us to have specific databases for different population types and knowledge. With access restrictions, we could even set up domain-specific databases (e.g. mathematics knowledge) and constrain what any connected model could access (e.g. up to and including algebra education). This applies to declarative and procedural knowledge equally.

With some neat use of caching database systems, you could set up a centralized, immutable system with a common knowledge base; each lab would run a local database system that caches from it, and each researcher would run a local system with write privileges for their respective models. Similarly, there is no reason any one model would be tied to only one database system. Need a model of a high-school senior who plays video games? Point the model at an aggregating database that provides access to separate high-school-level knowledge bases, and then point it at a video-game-specific database.

Performance

Many of the scalability benefits are directly relevant to performance. What's really nice about having multiple database systems is that the architecture could make simultaneous search requests to all of them and harvest the first result. With network latencies decreasing every year, it's not unreasonable to expect that infrequently used information would still be accessible faster than real time.
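That fan-out-and-take-the-first-answer behavior maps almost directly onto Java's standard concurrency utilities. A rough sketch, with KnowledgeBase and Chunk as made-up stand-ins:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: query every attached knowledge base in parallel and
// return whichever answers first. KnowledgeBase and Chunk are stand-in types.
public class FanOutRetrieval {

  public interface Chunk {}

  public interface KnowledgeBase {
    Chunk search(String pattern) throws Exception; // blocks until a result (or throws)
  }

  private final ExecutorService executor = Executors.newCachedThreadPool();

  /** Ask every database at once; invokeAny returns the first successful result. */
  public Chunk retrieve(List<KnowledgeBase> databases, String pattern, long timeoutMillis)
      throws Exception {
    List<Callable<Chunk>> queries =
        databases.stream().map(db -> (Callable<Chunk>) () -> db.search(pattern)).toList();
    return executor.invokeAny(queries, timeoutMillis, TimeUnit.MILLISECONDS);
  }
}
```

invokeAny blocks until the first query succeeds, cancels the stragglers, and throws if nothing answers inside the window, which is exactly the harvest-the-first-result behavior described above.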

But how?

I can't speak to how one would implement such a scheme in Lisp; the lack of real threading across most implementations makes this a challenge. In jACT-R it would be relatively simple (though still one heck of an undertaking).

For maximum performance you’d want the system to engage in asynchronous management of the relevant information subset. Fortunately, both the procedural and declarative modules already support asynchronous modes of operation. Each would attach listeners to the buffers (for chunk additions and removals) as well as to the chunks in the buffers (to catch any changes). All the chunks associated with these active chunks could then be prefetched. Similarly, their chunk-types would be used to prefetch the productions that depend upon those chunk-types.
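In Java terms, that bookkeeping could sit on a background executor fed by the buffer listeners. A minimal sketch, with an invented listener interface rather than the real one:

```java
import java.util.Set;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch: when chunks enter or leave a buffer, schedule prefetching or
// write-back on a background thread so the model itself never blocks. The listener
// interface and Chunk type are invented for illustration.
public class AsynchronousSubsetManager {

  public interface Chunk {
    String chunkType();
    Set<Chunk> associatedChunks();
  }

  public interface BufferListener {
    void chunkAdded(Chunk chunk);
    void chunkRemoved(Chunk chunk);
  }

  private final ExecutorService background = Executors.newSingleThreadExecutor();

  public BufferListener createListener() {
    return new BufferListener() {
      @Override
      public void chunkAdded(Chunk chunk) {
        // prefetch associated chunks and matching productions off the model thread
        background.submit(() -> prefetchAround(chunk));
      }

      @Override
      public void chunkRemoved(Chunk chunk) {
        // write back any parameter changes once the chunk leaves the active subset
        background.submit(() -> commitIfChanged(chunk));
      }
    };
  }

  private void prefetchAround(Chunk chunk) {
    for (Chunk associated : chunk.associatedChunks()) {
      // ...load 'associated' and the productions keyed by associated.chunkType()...
    }
  }

  private void commitIfChanged(Chunk chunk) {
    // ...compare against the persisted version and update the database if needed...
  }
}
```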

Since all major entities (types, chunks, productions) are represented as handles, the system can create and return these handles before the contents have been fully received and recreated from the databases. This allows the architecture to work with the chunks and productions before they are actually available, further exploiting the asynchrony. (Obviously, if a chunk or production's contents need to be accessed, the system will block until that data is available.)
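A handle of this sort is essentially a future: an identity you can hand out immediately, whose contents block only when someone actually asks for them. A small sketch along those lines:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Hypothetical sketch of a chunk "handle": usable immediately as an identity,
// blocking only when the actual contents are needed. ChunkContents is a stand-in type.
public class ChunkHandle {

  public record ChunkContents(String chunkType, Map<String, Object> slots) {}

  private final String name;
  private final CompletableFuture<ChunkContents> contents;

  public ChunkHandle(String name, CompletableFuture<ChunkContents> contents) {
    this.name = name;
    this.contents = contents;
  }

  /** Cheap: available immediately, can be passed around, compared, queued, etc. */
  public String getName() {
    return name;
  }

  /** Expensive: blocks until the database has delivered the contents. */
  public ChunkContents getContents() {
    return contents.join();
  }

  public boolean isResolved() {
    return contents.isDone();
  }
}
```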

While this prefetch mechanism would work well for information directly connected (through some depth), it doesn't address the retrieval of information specified at runtime by productions. Fortunately, the retrieval module already relies upon the asynchrony of the declarative module. It would merely make a request to the database and immediately return a handle to a chunk. If no result is received within a specified window of time (or an empty response is returned), the declarative module would just redirect the handle to point to the standard error chunk.
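Layered on top of such a handle, the timeout-and-error-chunk behavior might look something like this (Database and Chunk are again stand-ins):

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: issue the retrieval, hand back a handle immediately, and if
// the database misses its window (or returns nothing) resolve the handle to the
// standard error chunk instead. Chunk and Database are stand-in types.
public class TimedRetrieval {

  public interface Chunk {}

  public interface Database {
    CompletableFuture<Chunk> query(String pattern); // may complete with null for "no match"
  }

  private final Database database;
  private final Chunk errorChunk;

  public TimedRetrieval(Database database, Chunk errorChunk) {
    this.database = database;
    this.errorChunk = errorChunk;
  }

  public CompletableFuture<Chunk> retrieve(String pattern, long windowMillis) {
    return database.query(pattern)
        // empty result: fall back to the error chunk
        .thenApply(result -> result != null ? result : errorChunk)
        // window elapsed: also fall back to the error chunk
        .completeOnTimeout(errorChunk, windowMillis, TimeUnit.MILLISECONDS);
  }
}
```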

As chunks and productions become irrelevant, they can be checked for any parameter changes, and any changes can then be sent back to the database for persistence. Newly created chunks would be handled the same way, only being committed once they are deemed contextually irrelevant. Fortunately, Java already supports a mechanism to keep track of object references, so other modules would still be able to prefetch information and retain it indefinitely without fear of it being reclaimed out from under them.
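Java's reference classes (java.lang.ref) are the obvious fit here. A small sketch of how the subset manager could notice that nothing is using a chunk any more; the actual write-back of changes is elided, and Chunk is again a stand-in type:

```java
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;

// Hypothetical sketch of the reclamation side: the manager holds chunks only weakly,
// so any module that keeps a strong reference keeps the chunk alive, and once nobody
// does, the JVM enqueues the reference and we can retire the chunk from the active subset.
public class ReclamationTracker {

  public interface Chunk { long id(); }

  /** A weak reference that remembers the chunk's id so we know what was reclaimed. */
  static final class ChunkReference extends WeakReference<Chunk> {
    final long chunkId;
    ChunkReference(Chunk chunk, ReferenceQueue<Chunk> queue) {
      super(chunk, queue);
      this.chunkId = chunk.id();
    }
  }

  private final ReferenceQueue<Chunk> reclaimedQueue = new ReferenceQueue<>();

  /** Register a chunk that has been pulled into the active subset; keep the returned reference. */
  public ChunkReference register(Chunk chunk) {
    return new ChunkReference(chunk, reclaimedQueue);
  }

  /** Poll from a background thread; returns the id of a reclaimed chunk, or -1 if none. */
  public long pollReclaimed() {
    Reference<? extends Chunk> reference = reclaimedQueue.poll();
    return (reference instanceof ChunkReference cr) ? cr.chunkId : -1L;
  }
}
```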

jACT-R already has a simple format that is easily persisted and searched in databases, and is already used to transmit information in an agnostic manner across the network: the ASTs used to bi-directionally translate between source and model.

In theory, all of this could be accomplished by extending the default implementations of the declarative and procedural modules, with custom implementations of the chunk, type and production handles (to notify the modules when they are no longer relevant).

Will this idea ever see the light of day? I sure as hell hope so. Is it anywhere on my horizon? Nope. Little old me still needs to bump up the pub count.

April 9, 2008: 2:21 pm: Big Ideas

For the past few days I’ve been working on implementing the movement tolerance handling in the visual system (and aural, configural, manipulative), and I’m really tired of it right now. It was supposed to have been easier than this (hell, it should have been done last week) – but it ultimately required some significant refactoring of the asynchronous caching of afferent chunk encodings. Total headache.

Now, as I'm getting to the final tidbits, I'm incredibly bored with it. Add to that the fact that I prefer my code to be readable, and twice now I've refactored what I've already done. It will be done this week..

..but for a distraction, I've added a new category to document some of the crazy cool ideas I've been playing with in the back of my head. Some of the people I converse with believe these ideas should be held close to the chest until I can get prototypes assembled, but I'd rather throw them out there, see if they resonate with folks, and get some additional feedback.

Without further ado