On implementing a theory of the mind (or: What took you so long?)

Submitted by AHasvers on Mon, 01/04/2016 - 01:49

And a happy new year!

So, why haven't I posted anything since time immemorial? (except for a silly joke)

The reason is simple: on one hand, lots of real work; on the other hand, improving the engine became a lot like work. But not for the reasons you would expect: it was not a problem of it being tedious. Well, sure, some of it was, and that didn't help.

The real problem though, is that I stumbled into an actual CS research problem. See, what I was trying to do was supposed to be simple: characters have either positive, neutral or negative belief about various statements, and there are logical links that are activated by certain statements and reinforce or oppose other statements. Characters come with only a few fundamental "biases" i.e. preexisting beliefs; all their other positions are simply derived logically from those few axioms. Explore the beliefs of others until you find their biases, and find counter-arguments to the biases you don't like - and BAM, you've rhetoricized the other characters into agreeing with you.
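To make that concrete, here is a minimal sketch of the idea in Python. None of these names (Character, Link, and so on) come from the actual engine; it's just about the simplest data structure that matches the description above:

```python
# Minimal sketch of the belief model described above.
# All names here are illustrative, not the engine's actual code.

POSITIVE, NEUTRAL, NEGATIVE = 1, 0, -1   # the three belief values per statement

class Link:
    """A logical link: belief in `source` reinforces (weight > 0)
    or opposes (weight < 0) belief in `target`."""
    def __init__(self, source, target, weight):
        self.source = source
        self.target = target
        self.weight = weight

class Character:
    """A character holds a few fundamental biases; every other
    position is derived from them through the links."""
    def __init__(self, biases):
        self.biases = dict(biases)   # statement -> POSITIVE or NEGATIVE

    def belief(self, statement, links, depth=3):
        """Derive a belief by propagating biases along the links
        (depth-capped to keep the toy version from looping)."""
        if statement in self.biases:
            return self.biases[statement]
        if depth == 0:
            return NEUTRAL
        score = sum(self.belief(l.source, links, depth - 1) * l.weight
                    for l in links if l.target == statement)
        return POSITIVE if score > 0 else NEGATIVE if score < 0 else NEUTRAL
```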

That was the plan.

But I didn't want it to be a perfect information game. I wanted there to be a sort of fog of war. The weak kind of uncertainty is that characters don't have all their beliefs in mind all the time, and they can be unaware that certain beliefs and links even exist.

The strong kind of uncertainty is that you don't know exactly how the others feel about every single statement: you infer it from what they say, and from how they react to what you said on a scale of agreement-disagreement. That's extremely unwieldy if the player has to keep track of all such details, so my plan was to give them an automatically-computed map of the estimated beliefs and biases of the other characters -- one that could be more or less accurate, depending on the PC's perceptiveness and the other characters' skill at deception.
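A toy version of that belief map, with made-up names and numbers: the read on someone's belief gets less reliable as their deception outweighs the PC's perceptiveness. The formula is purely illustrative.

```python
import random

BELIEFS = (1, 0, -1)   # positive, neutral, negative, as in the sketch above

def estimate_belief(true_belief, perceptiveness, deception):
    """The PC's guess about another character's belief. The read gets
    noisier as the target's deception outweighs the PC's perceptiveness.
    Placeholder numbers, not the engine's actual formula."""
    error_chance = max(0.0, min(0.9, 0.3 + 0.1 * (deception - perceptiveness)))
    if random.random() < error_chance:
        return random.choice(BELIEFS)   # a noisy, possibly wrong read
    return true_belief
```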

That sounds exciting, huh? Well it does to me. But that's where it gets bad.

First, I should mention: what's the point of having and stating neutral beliefs at all? Well, let's say you voice a noncommittal statement, like "Maybe it was the butler all along, maybe not". It can be a first step in making a link such as "Maybe the butler did it, and in that case, the knife must be in the pantry". Now that you've said that, you *know* that the other characters know about it, and therefore, if they do have a belief about the butler, you are sure that they will take it into account when they think about the location of the knife. That's how you can convince them of something that you're not sure of yourself.
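Continuing the toy sketch from above: voicing a link is what turns it into common knowledge, so it starts feeding into everyone's derived beliefs, even though you yourself stay neutral about the premise. Again, all names are mine, not the engine's.

```python
# Builds on the Character/Link sketch above.

butler_did_it = "the butler did it"
knife_in_pantry = "the knife is in the pantry"

known_links = []   # links that have been voiced, hence common knowledge

def voice_link(link):
    known_links.append(link)

# You are neutral about the butler; Alice is biased toward his guilt.
you = Character(biases={})
alice = Character(biases={butler_did_it: POSITIVE})

voice_link(Link(butler_did_it, knife_in_pantry, weight=1))

print(you.belief(knife_in_pantry, known_links))    # 0: you stay neutral
print(alice.belief(knife_in_pantry, known_links))  # 1: she now buys the pantry
```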

That was neutral beliefs interacting with "weak" uncertainty. But how do they interact with "strong" uncertainty? You are noncommittal, so you cannot deduce whether they agree or disagree with you - unless they actually voice their opinion (which they could do to reclaim possession of the topic for scoring purposes, but that's another story). So you now have a statement where you have a large uncertainty about what someone else believes. Okay, not so bad, is it?

It is bad.
Because that uncertainty propagates. It means my characters are not doing logic, but Bayesian inference. Let's say someone disagrees with the claim that the knife might be in the pantry, even though you pointed out the link to the butler. Does that mean they are biased against the knife being in the pantry (because they don't want you to go there, perhaps?), or against the butler being the culprit? If you knew their positions on everything else in the world, you might be able to decide; as it is, working out where their biases lie becomes a parametric inference problem: you are trying to find the distribution of parameters (biases) that best fits the data you can infer from the others' statements and their agreement with yours, while respecting certain constraints (as few biases as possible, aka Occam's razor). That's not quite black magic - there are lots of good methods for that sort of thing in machine intelligence textbooks - but it's another level of complexity entirely, and it's never been piled on top of the sort of logic I'm using.
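For the record, here is a deliberately naive, brute-force version of that inference problem, building on the toy sketch above. A real solution would use proper probabilistic machinery rather than enumerating candidate bias sets, but it shows the shape of the problem: fit the observed reactions, penalize extra biases.

```python
from itertools import combinations, product

def infer_biases(statements, links, observations, max_biases=2):
    """Brute-force illustration, not the engine's algorithm:
    find the smallest set of biases that best explains `observations`,
    a dict of statement -> the (possibly noisy) belief you perceived."""
    best, best_score = {}, float("-inf")
    for k in range(max_biases + 1):
        for combo in combinations(statements, k):
            for values in product((POSITIVE, NEGATIVE), repeat=k):
                candidate = Character(biases=dict(zip(combo, values)))
                # Likelihood term: how many observations this bias set explains.
                fit = sum(candidate.belief(s, links) == b
                          for s, b in observations.items())
                # Occam prior: each extra bias costs a little.
                score = fit - 0.5 * k
                if score > best_score:
                    best, best_score = candidate.biases, score
    return best
```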

So, around last September, I finally got the rest of the interaction model to work: you say something, they (maybe mis-) interpret what you say, they react to their interpretation, you (maybe mis-) perceive their reaction, and you infer what they think about what you said. That was already tricky.
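In toy form (reusing the earlier sketch, with made-up noise parameters), one exchange of that loop looks something like this:

```python
import random

def noisy(value, error_chance, alternatives):
    """With probability `error_chance`, the value comes out as a random
    alternative instead (which may or may not happen to be right)."""
    return random.choice(alternatives) if random.random() < error_chance else value

def exchange(statement, listener, links, all_statements,
             interpretation_noise=0.1, perception_noise=0.1):
    """Sketch of the loop: say -> (mis)interpret -> react ->
    (mis)perceive -> infer. Names and noise levels are placeholders."""
    # They may misinterpret which statement you actually made...
    interpreted = noisy(statement, interpretation_noise, all_statements)
    # ...react according to their derived belief about what they heard...
    reaction = listener.belief(interpreted, links)
    # ...and you may misread that reaction.
    perceived = noisy(reaction, perception_noise, [POSITIVE, NEUTRAL, NEGATIVE])
    return perceived   # what you now think they think about `statement`
```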

But I didn't go into the whole uncertainty thing; I only used very simple estimates with no probabilistic computation at all. And it's come back to bite me by creating all sorts of exotic bugs where the PC completely misrepresents the beliefs of other characters for no good reason - bugs made especially hard to untangle by the feedback loops on the network of beliefs.

And here I am. Today, I have had to decide: do I want to make a game, or a state-of-the-art AI model of argumentative interaction? The answer was surprisingly hard to find. It's not your typical case of feature creep because, well, the game was meant to research the mechanics almost as much as the reverse. But okay, I need to make a game more than I need to make AI. Which means I'll scrap the strong uncertainty for now, and pray it doesn't kill the vision entirely.

That was the story, lasses and lads. Welcome to 2016.