March 18, 2009: 12:12 pm: Cognitive Modeling, Errata

I just finished a little article on the motivation and methods of dumbing down game AIs. It’s particularly interesting in that it makes for a good case in point regarding how cognitive science and traditional AI differ.

The article starts off by commenting on the challenges of less-than-perfect AIs, which is interesting in its own right. Traditional AI is often concerned with finding the optimal, deterministic, and efficient solution to a given scenario. As a cognitive psychologist, I’m more concerned with the solution that best matches human performance. And it is this focus on human-like performance that dumbing down AIs is all about.

The first possible resolution presented (and summarily dismissed) is to reduce the amount of computation performed. This rarely works as it results in completely idiotic behavior that even novices would be loath to exhibit. From a cognitive modeling standpoint, novice/expert distinctions are primarily represented as knowledge base, skill, and learning differences – but the total computational time is relatively unaffected. Novices are novices not because they can’t think as long and as hard about a problem, but because they lack the relevant strategies, experience, and learned optimizations.

Instead, the author argues, AIs should “throw the game” in a subtle but significant way (i.e., make a simple mistake at a pivotal point). This is actually fairly easy to do, assuming you have an adequate representation of the scenario, and computer games are virtually always blessed with omniscience. What’s most interesting is that this is effectively scaffolding in the Vygotskian sense, with the AI opponent acting as a guide in the player’s skill development. If the AI is aware of the skill level of the player (and not in the gross easy/medium/hard sense), perhaps through a model-tracing mechanism, it can tune its behavior dynamically to provide just enough challenge, a technique that has been used in cognitive tutors for quite some time now.
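
The idea can be sketched in a few lines of Java. All the names here are hypothetical (the article describes the concept, not this code): an opponent keeps a running estimate of the player’s win rate and, at pivotal points, deliberately takes the runner-up move with a probability that grows as the player struggles.

```java
import java.util.List;
import java.util.Random;

/**
 * Sketch of skill-tracked "game throwing": the AI estimates the player's
 * win rate and, when the player is struggling, occasionally picks the
 * second-best move at a pivotal point. Illustrative names throughout.
 */
public class ScaffoldingAI {
    private double playerWinRate = 0.5;        // exponentially weighted estimate
    private static final double ALPHA = 0.2;   // learning rate for the estimate
    private final Random rng = new Random();

    /** Update the win-rate estimate after each game. */
    public void recordGame(boolean playerWon) {
        playerWinRate = (1 - ALPHA) * playerWinRate + ALPHA * (playerWon ? 1.0 : 0.0);
    }

    /** Probability of a deliberate blunder rises as the player's win rate falls. */
    public double blunderChance() {
        return Math.max(0, 0.5 - playerWinRate);
    }

    /** Return the index of the best-scored move, or the runner-up when throwing. */
    public int chooseMove(List<Double> scores, boolean pivotalPoint) {
        int best = 0;
        for (int i = 1; i < scores.size(); i++)
            if (scores.get(i) > scores.get(best)) best = i;
        if (pivotalPoint && scores.size() > 1 && rng.nextDouble() < blunderChance()) {
            int second = (best == 0) ? 1 : 0;
            for (int i = 0; i < scores.size(); i++)
                if (i != best && scores.get(i) > scores.get(second)) second = i;
            return second;   // the subtle, significant mistake
        }
        return best;
    }
}
```

Against a player on a winning streak the blunder chance decays to zero, so the throwing only surfaces when the skill gap actually warrants it.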

The author also points out the utility (and failings) of reducing the accuracy of the AI’s information. This particular issue has always stuck in my craw as a gamer and as a psychologist. Perfect information is an illusion that can only exist in low-fidelity approximations of a system. Ratchet up that fidelity and the inherent noise in the system starts to become evident. Humans are quite at home with uncertainty (or we just ignore it entirely at the perceptual level). One of the easiest ways to dumb down an AI is to give it the same limitations that we have, rather than imposing new artificial ones. It’s not about probabilistically ignoring the opponent’s last move, but rather not letting the AI see past the fog of war in the first place. Don’t add small random noise to the pool-shot trajectory; rather, make the AI line up the shot as we do, with perceptual tricks & extrapolated imaginal geometries.
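
As a sketch of the “limit the input, not the output” point, assuming a simple 2D world (the types are illustrative, not from any engine): the AI never sees the whole world state, only what a filtered perception layer hands it.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Sketch of fog-of-war perception: rather than perturbing the AI's chosen
 * action with random noise, restrict its percepts to what falls inside a
 * sensing radius -- the same limitation the player faces.
 */
public class FoggedPerception {
    public static class Unit {
        final double x, y;
        public Unit(double x, double y) { this.x = x; this.y = y; }
    }

    private final double viewRadius;

    public FoggedPerception(double viewRadius) { this.viewRadius = viewRadius; }

    /** The AI only ever reasons over this filtered list, never the full world. */
    public List<Unit> visibleFrom(Unit observer, List<Unit> world) {
        List<Unit> seen = new ArrayList<>();
        for (Unit u : world) {
            double dx = u.x - observer.x, dy = u.y - observer.y;
            if (Math.hypot(dx, dy) <= viewRadius) seen.add(u);
        }
        return seen;
    }
}
```

The decision logic downstream stays deterministic and “smart”; the uncertainty comes in honestly, at the perceptual boundary.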

Cognitive science would dumb down the AI not by introducing noise, clever game throwing, or similar crippling, but by introducing the same limitations that humans possess. The limitations of perception, action, memory, attention, and skill are what make us the adaptable agents that we are. All of this is just a point of comparison. Cognitive modeling is still more research than application (with some notable exceptions). However, I can see a near-term future where game developers build human-like opponents not through clever programming, but through a genuine focus on how humans actually play.

July 5, 2008: 6:15 pm: Errata

Recently I finished (in so much as any software I develop is ever actually finished) a tool that allows bulk jACT-R model runs to be submitted to a remote server. All in all, it was an amazingly painless experience made possible by ECF, a protocol-neutral communications library in the latest release of Eclipse. To be completely honest, while I’ve had dreams about this tool for a long time, it wasn’t until I saw the webinar for ECF that I realized just how feasible this project was.

What follows are my thoughts and impressions on using ECF. Before I begin, let me include this disclaimer: for all things networking, I am self-taught. And like all self-taught men, I had an idiot for a teacher. However, I do have a great deal of experience building networked tools for robotics, modeling and my own edification. Up until this point I have relied upon MINA, another slick networking framework – but one that sticks closer to more traditional networking models.

I’d like to start off with the positive points.

  • The use of the IContainer and adapter paradigm makes it really quick to get started using the framework, particularly for those familiar with Eclipse.
  • The generic (and serializable) IDs make the tracking of connections absurdly easy (e.g. I use them as map keys to keep track of processing jobs and the clients & servers they are associated with across invocations of the IDE. Quit, start up again, and everything is right back the way it was).
  • The discovery API is incredibly simple. Having wrestled with zeroconf years back, I can say this is a dramatic improvement.
  • Retrieving files via known standard protocols is also a snap (I use it to fetch the archives of the runs and results, with a dynamic Jetty instance to serve them). From what I can tell, virtually all container instances support a configurable URL file-fetch mechanism (file:, http:, even scp).
  • The use of channels makes arbitrary messaging (read as: serialized POJO) quick and painless, particularly since most containers support it with relatively little difficulty.
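
The serializable-IDs-as-map-keys trick in the second bullet can be sketched with plain Java serialization. `JobId` and `Job` below are stand-ins for the real ECF ID and my job types, not actual ECF API:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a job registry keyed by serializable connection IDs: write the
 * whole map to disk on shutdown, read it back on the next launch, and the
 * job/client associations survive across invocations of the IDE.
 */
public class JobRegistry {
    public record JobId(String name) implements Serializable {}
    public record Job(JobId id, String status) implements Serializable {}

    public static void save(Map<JobId, Job> jobs, File file) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(file))) {
            out.writeObject(new HashMap<>(jobs));   // HashMap is serializable
        }
    }

    @SuppressWarnings("unchecked")
    public static Map<JobId, Job> load(File file) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(file))) {
            return (Map<JobId, Job>) in.readObject();
        }
    }
}
```

Quit, relaunch, reload the map, and every job is still attached to its ID.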

Before I present my gripes, let me just say that the folks behind this project have done an awesome job. It took about a week to get this whole tool up and running. But there were some headaches, mostly due to documentation (availability, mostly; what’s there is good) and mental-model mismatches.

Clients as Servers

If you want to build a server, you’ll probably look at the “ecf.generic.server” container, thinking that you’ll hook up to it some connection policies, channel handlers and be done with it. Not exactly. The ecf generic server acts as a hub. Your actual server will be a client that connects to the ecf server in order to talk to the actual clients that connect to it as well. I can understand the reasoning here, but this doesn’t map very cleanly onto most conceptions of a server. Looking through the developer mailing list, I see that I’m not the only one who’s gotten a bit confused.

This is magnified by how the generic server handles channel communications. When a client connects, it is assigned a unique ID; it is up to you to determine which IDs correspond to the server-client and which to the clients proper.

I am somewhat surprised that there is no IServerContainerAdapter that permits the setting of connection policies and the like. Something that more closely maps onto the traditional concept of a server. Something less general than the ecf generic server that can be extended into something very specific. As it stands, I don’t see a way to quickly build a server that non-ECF clients can access. (But I’m probably wrong).


Where are those file sharing services?
I love them OSGi services, and ECF supports them as well. But I’ll be damned if I could find any of the file-sharing services. I don’t know if they’re missing or I’m clueless, but I wasted a couple of hours looking for said services. Fortunately, they, like all good Eclipse citizens, include their source code. In there I discovered that most containers support the URL file transfers right out of the gate. Doh!

Where are those container names?

ContainerFactory.getDefault().createContainer(String) is the most documented method for creating a new container, yet finding the list of current container names can be a pain (short of a PDE extension-point search). There is a providers list, but it doesn’t include the names. After digging, I finally found it on the Eclipse site proper. Since the wiki is actually the first place the ECF project page directs you to, stumbling back to the Eclipse site again seemed like the wrong path to me. I dunno. If it were me, it would be at the top of the providers list and maintained regularly (“ecf.generic.fileshare” is apparently a vestigial remnant from earlier versions).

Anyway, as I mentioned, these are small gripes for what is an otherwise exceptional piece of work, particularly when you consider all the functionality that this is permitting. Good job, guys.

May 6, 2008: 3:23 pm: Errata, jACT-R, Robotics

I am. I am.

Surprise, surprise, my previous rant was completely unjustified. Turns out it was my own stupid fault. I was programmatically launching player, but unfortunately I forgot to harvest the program’s stdout and stderr. The process’s buffers were flooded. Stooopid Ediot!
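
For anyone who hasn’t hit this one before, the bug in miniature: a child process writes more output than the pipe buffer holds, and if the parent never reads it, the child blocks forever. A minimal sketch of the fix is to merge stderr into stdout and drain the stream while the process runs:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

/**
 * Launches a process and continuously drains its output so it can never
 * stall on a full pipe buffer. A sketch, not my actual launcher code.
 */
public class DrainedLauncher {
    public static String run(String... command) throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);   // fold stderr into stdout: one stream to drain
        Process p = pb.start();
        StringBuilder output = new StringBuilder();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {   // drain as the process runs
                output.append(line).append('\n');
            }
        }
        p.waitFor();
        return output.toString();
    }
}
```

For long-lived processes like player, the drain belongs on its own thread rather than blocking the launcher, but the principle is the same.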

But, at least the monkey sims can run for much longer now. And not a moment too soon as the robot lab is rapidly filling with interns. (It’s such a nice change of pace to no longer be on the bottom of the totem pole)

May 5, 2008: 3:59 pm: Errata, jACT-R, Robotics

I love division of labor, it saves us all from having to reinvent the wheel… but sometimes it just drives me insane.*

I’ve hooked up player/stage through a Java client, enabling jACT-R (and the monkey models) to interact in the simulated robotic environment. For some time now I’ve had a bug where stage would freeze after around four minutes of simulated time. I initially thought it was due to my unorthodox use of the client (the threading model in the client is a tad weak), so I reverted to the normal usage, and the simulations were then running past four minutes with no problem. Yay, problem fixed!

Nope. Now it deadlocks around six minutes. So maybe it’s a problem with stage? Lo and behold, someone else was encountering a similar problem. The recommended fix? Upgrade to player/stage 2.1, which is, of course, not yet supported by the client, and no one has posted any word on an ETA.

I am able to detect when the freeze occurs, which led me to believe that I’d be able to disconnect the clients and reconnect. Unfortunately, the only way to resurrect stage at this point is to completely kill the socket-owning process. My only option is to not only disconnect the clients, but then force-quit stage and restart it. Ick!
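
The workaround, such as it is, amounts to a watchdog: note the last time the simulation made progress, and once it stalls past a timeout, fire a restart action. A hypothetical sketch (the real recovery path — disconnect clients, kill stage, relaunch — is process-specific):

```java
/**
 * Sketch of a stall watchdog: callers report progress via heartbeat(); a
 * periodic checkOnce() fires the restart action when progress stops.
 */
public class StallWatchdog {
    private volatile long lastProgress = System.currentTimeMillis();
    private final long timeoutMillis;
    private final Runnable restartAction;

    public StallWatchdog(long timeoutMillis, Runnable restartAction) {
        this.timeoutMillis = timeoutMillis;
        this.restartAction = restartAction;
    }

    /** Call whenever the simulation advances. */
    public void heartbeat() { lastProgress = System.currentTimeMillis(); }

    public boolean isStalled() {
        return System.currentTimeMillis() - lastProgress > timeoutMillis;
    }

    /** Poll once; fires the restart action if stalled. Returns whether it fired. */
    public boolean checkOnce() {
        if (isStalled()) {
            restartAction.run();   // e.g. force-quit stage, then reconnect clients
            heartbeat();           // reset so we don't fire repeatedly
            return true;
        }
        return false;
    }
}
```

Ugly, but it beats babysitting the simulation by hand every six minutes.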

I think it’s time to work on something else for a little while.

* This is not meant as a disparagement of anyone’s work, rather just me venting. I fully acknowledge that the same statements could be (and probably have been) applied to my own work.

April 16, 2008: 5:03 pm: Errata, jACT-R

After finally getting the movement tolerances working in jACT-R so that the robo-monkeys could finally see each other as they move around, I came upon a motor bug. Actually, I’m still trying to hunt down the exact circumstances under which it occurs. It’s particularly challenging because it involves the interaction between the robot/simulation sending a movement status and the model’s desire for things to complete in a predictable amount of time.

Since this bug is so hard to track down given the current tools, I decided it was about time to implement a long-desired feature. In my experience (and the reports of others support this), most of us just run the models and, when something goes wrong, dig through the trace. Only as a last resort do we ever really step into the models. (Basically, all that work I did to integrate breakpoints and a debugger was for naught :) )

However, once we’ve found where something went wrong, we immediately want to know what fired (easy enough from the trace) and what the state and contents of the buffers were (not so easy). The log view jACT-R has provided is good, but not great. The tabular format and filters make it easier to ignore irrelevant info, but you still don’t get a clear picture of the buffer contents. To rectify this I’ve added the ability to dynamically modify the tooltip for the log viewer. Combined with the buffer event listener and the runtime tracer, the log view can now display the full contents of any buffer, as well as its flag values, both before and after conflict resolution.

Buffer content tooltip

The buttons on the tooltip allow you to toggle between seeing the buffer contents before and after conflict resolution. It’s not completely correct right now in two regards: some state information may be lost at the start and end of a run (i.e. your model starts with buffers pre-stuffed or the runtime terminates before sending the last bits of info), and the changes to the chunks while in the buffer are not being tracked. The first issue I don’t care about, but the second will be addressed soon.
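
Conceptually, the bookkeeping behind the tooltip looks something like this (the types are stand-ins, not jACT-R’s actual listener API): snapshot every buffer per cycle, both before and after conflict resolution, and let the tooltip look up whichever side is toggled.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of per-cycle buffer snapshotting: the tracer records each
 * buffer's contents and flags before and after conflict resolution, and
 * the tooltip queries whichever side the user has toggled.
 */
public class BufferSnapshotter {
    public record Snapshot(String contents, Map<String, String> flags) {}

    private final Map<Long, Map<String, Snapshot>> before = new HashMap<>();
    private final Map<Long, Map<String, Snapshot>> after = new HashMap<>();

    public void onPreConflictResolution(long cycle, String buffer, Snapshot s) {
        before.computeIfAbsent(cycle, c -> new HashMap<>()).put(buffer, s);
    }

    public void onPostConflictResolution(long cycle, String buffer, Snapshot s) {
        after.computeIfAbsent(cycle, c -> new HashMap<>()).put(buffer, s);
    }

    /** What the tooltip renders; null when no snapshot was captured. */
    public Snapshot lookup(long cycle, String buffer, boolean postResolution) {
        Map<String, Snapshot> byBuffer = (postResolution ? after : before).get(cycle);
        return byBuffer == null ? null : byBuffer.get(buffer);
    }
}
```

The null case is exactly the first caveat below: pre-stuffed buffers at model start and a terminated runtime both leave holes in the record.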

This added information does carry with it a moderate performance penalty so I’ll be including it as a runtime configuration option. A little later I will also add tooltip information for the declarative (i.e. visualization of encoded chunks) and procedural (i.e. fired production and resolved instantiation, encoded productions) module traces.

But for now, I’m quite happy and this is making tracking down that spurious motor.state=error soooo much easier.

October 23, 2007: 3:53 pm: Errata

Things have been quiet as I have been getting acclimated to the new job at NRL. Due to some snafu, after two weeks I still don’t have a computer. While it had been ordered by my boss a month before I arrived, procurement didn’t order it until a week after I’d arrived. Supposedly it has been delivered, and so now it is “days, not weeks” until I will have it.

Anyway, a plan of attack has been assembled for the next few months. The first order of business is to get the long awaited Player/Stage simulator up and running so that ACT-R/S can actually operate within a dynamic spatial environment.

Up next will be building a model of someone else’s data and then migrating some preexisting robotic models to use ACT-R/S. Basically these first few months are slated for engineering problems as well as getting my feet wet with the current work of the others in the lab. This is fine by me, as the subsequent research path still needs to be considered in greater depth.

The spare time will also be filled with getting the jACT-R site back up and running with documentation and examples. The code base also needs some updating to handle the latest changes to Eclipse and its new execution environment (for macs).

Now that I’m back in the flow, look for weekly updates both here and at jACT-R.

May 14, 2007: 6:09 pm: Errata

So the post-doc application was submitted. That was fun – let’s see if I can get a job teaching robots to play hide and seek.

Now that said distraction is out of the way, I’ve been modeling and connecting. This has resulted in a long overdue rewrite of common reality. Now it just needs to be tested.

Methods section is in process right now. So fun. :)

April 23, 2007: 3:29 pm: Errata

I am the master of poor timing. I’d been aiming for a post-doc at NRL, but I’d held off on officially touching base with my contact until more progress had been made.


March 30, 2007: 6:27 pm: Errata, jACT-R, Spatial Reasoning

Ahh, Pittsburgh. One more week of data collection is in the can, and it will be the last data collection trip to Pittsburgh. Whatever data still needs to be collected can be gathered in Philly. In theory, I shouldn’t have to come back to Pitt until defense time. One can dream.


March 29, 2007: 10:22 am: Errata

Here’s a great post on the differences between brains and computers over at ScienceBlogs.
