Alabaster Update

Some progress on speed and content, but more still to do. Details, with some spoilers, after the cut.

On the issue of speed, my attention has been redirected a bit by the discussion here, to the effect that TADS 3 gets slow on scope/parsing issues when there are a large number of objects in contemplation. Obviously, it’s a different system, but I think we have something similar going on: at this point, Alabaster is mainly choking when it’s called on to make parsing and scope decisions about a body of 400+ objects, and running “does the player mean” rules to make choices between ambiguous selections in that body. This is dumb of me, but before reading that thread I hadn’t thought of analyzing the drag on the system produced by object-matching when there were that many different objects in consideration. A few tests showed that, on a turn when the player says something very ambiguous, the game may need to run the “does the player mean” rules upwards of a hundred times in order to score the potential matches.
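
For anyone who hasn't used them: "does the player mean" rules are Inform 7's hook for scoring candidate objects during disambiguation, and they run once per candidate, so a vague command checked against 400+ objects gets expensive fast. A generic illustration (nothing like Alabaster's actual rules):

    The Orchard is a room. The poisoned apple and the wax apple are in the Orchard.

    Does the player mean examining the poisoned apple: it is very likely.
    Does the player mean examining the wax apple: it is unlikely.

Typing X APPLE here makes the parser score both apples against these rules before settling on one; scale the candidate pool up to the size of Alabaster's quip list and the cost of each ambiguous turn adds up.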

If I can narrow the scoping, that may help; the worst-case scenario would be hand-writing some I6 routines for noun selection to build in some assumptions that Inform can’t really make in general, but *can* make for quip selection. But that may not be necessary. We will see.
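
The declarative end of that narrowing might look something like the following; this is only a sketch using Inform's "deciding the scope" activity, with made-up names rather than anything the conversation extension really defines. The idea is that quips live out of world and only the currently relevant ones are offered to the parser at all:

    The Clearing is a room.

    A quip is a kind of thing.
    A quip can be relevant or irrelevant. A quip is usually irrelevant.
    The apple quip is a relevant quip. The weather quip is a quip.

    After deciding the scope of the player:
        repeat with Q running through relevant quips:
            place Q in scope.

With something along these lines, the parser only ever has to match nouns against the relevant subset rather than the whole body of quips.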

This issue is itself going to be very dependent on game design choices. My major (non-collaborative) WIP using conversation doesn’t slow as much as Alabaster because, though it has as many quips, it is also divided into numerous scenes. That means that quips can be tied to specific scenes, and that means easier optimization. Alabaster is a better stress-test for the system in that respect.
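
For what it's worth, the scene-tagging itself is cheap to express. Continuing the sketch above (still with invented names, and assuming a scene-valued property on quips), something like:

    A quip has a scene called the home scene. The home scene of a quip is usually the Entire Game.

    Definition: a quip is topical if the home scene of it is happening.

The scope rule could then run through topical quips only, which is exactly the kind of cheap pre-filtering that Alabaster, without that scene structure, can't rely on.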

Progress made in the last few days (which represents no more than a few hours’ work, because family Christmases are pretty consuming):

  • Revamped cache-generation so that it is now easy to hard-code the cache of quip information. This completely removes the start-up delay. The solution will be portable to other games using the same system.
  • Systematically reviewed all quips on the basis of some more beta-testing feedback, which suggested that the conversation was not being affected enough when the player learns that Snow White is possessed. In particular, it was still possible to have a number of exchanges that were phrased as though the player still thought she was sorta-kinda-normal. There are now some 67 quips that are marked as “negated by” this fact — that is, they become invalid pieces of conversation as soon as the player knows the new information.
  • Since fact-knowledge has an increasing effect on the availability of quips, added some optimizations to move certain quips in and out of consideration when the player learns certain critical information. It’s not really practical to do this *every* time the player learns a fact (the time spent checking which quips are disqualified would more than cancel out any speed gains); but for a handful of critical game-changers, I think it’s a good trade. (A rough sketch of this and the “negated by” marking follows this list.)
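
Very roughly, and with stand-in names rather than whatever the conversation library actually calls these things, the “negated by” marking and the pruning optimization from the last two items amount to something like this:

    The Cottage is a room.

    A fact is a kind of thing.
    The possession-fact is a fact.

    A quip is a kind of thing. A quip can be live or retired. A quip is usually live.
    A quip has an object called the negating fact. [a stand-in for the real negated-by marking]

    The small-talk quip is a quip. The negating fact of the small-talk quip is the possession-fact.

    To note the learning of (revelation - a fact):
        [prune once, at learning time, rather than re-checking every quip on every later turn]
        repeat with Q running through live quips:
            if the negating fact of Q is the revelation:
                now Q is retired.

In the real game this would be triggered from wherever the critical fact is imparted; the point is that the expensive filtering happens once, when the game-changer is learned, instead of on every subsequent turn.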

I’ve also had an offer of help on the art, which I’m excited about. So hopefully that will come together.

The other major thing I need to work on is Happy. Snow White is a more complicated character in many ways, but she is essentially reactive more than active. Happy, on the other hand, is trying to get the player to do something. At the moment, his conversation direction is mostly scripted — the backbone of the conversation remains the same, and while the player can slow it down by asking tangential questions or speed it up by introducing some quips before Happy gets a chance to, there is little essential variation. The scripted approach is plausible enough if the player comes into that conversation with limited knowledge (in particular, doesn’t yet know that Snow White is possessed). It doesn’t work *nearly* as well if the player knows a fair amount about other topics and tries to steer the chat toward demons rather than vampires.

So we need something closer to AI than the current queuing system in order to make Happy behave plausibly: we need him to assess what the player knows (and since Happy is omniscient, he can have perfect understanding of what the player currently knows, which is one convenience) and choose his line of persuasion accordingly. And we need that approach to be flexible and alter from turn to turn if the player seems to be fixated on one issue or another.

That should not be too hard to hook into the rest of the conversation system. I designed it with the idea that underneath all the other layers there might be some less-scripted, more AI-like way of working out what the interlocutor wants to say next, and it should not be hard to replace “say the next quip you’ve been queued to say” with something more like “think about the current world situation and pick a line of argument to follow”.
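
In sketch form, with hypothetical quip and fact names that bear no resemblance to the real data, the replacement is just a decision that consults the world state every time it is asked:

    The Hillside is a room.

    A fact is a kind of thing. A fact can be known or unknown. A fact is usually unknown.
    The possession-fact is a fact. The vampirism-fact is a fact.

    A quip is a kind of thing.
    The hinting-at-demons quip is a quip.
    The redirecting-from-vampires quip is a quip.
    The pressing-the-request quip is a quip.

    To decide which quip is the next persuasive move:
        [if the player keeps circling vampires, steer the talk back toward demons]
        if the vampirism-fact is known and the possession-fact is unknown, decide on the redirecting-from-vampires quip;
        [otherwise, lead up to the possession revelation before pressing the main request]
        if the possession-fact is unknown, decide on the hinting-at-demons quip;
        decide on the pressing-the-request quip.

Whatever currently pops the next queued quip would consult something like this each turn instead, so the plan re-evaluates itself as the player's knowledge (or fixation) changes.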

But it will take a little work to decide what rules should govern Happy’s motivation in various circumstances.

So, taking everything together, that leaves:

  • (Still on the list): figuring out that obscure too-many-activities bug.
  • Optimizing individual turns more by trying to reduce the scoping and parsing work needed on any given occasion.
  • When I’ve done all the optimization I can do myself, taking up an offer of help with profiling the remaining issues.
  • Hooking the artificial intelligence into Happy.
  • Incorporating artwork, assuming that works out.
  • Doing some automated testing on the new additions.
  • AFTER all that: performing at least one more round of human beta-testing on the core project, because that is enough new material that it is bound to introduce a heap of bugs.
  • (Not necessarily before the release of Alabaster): work on a Processing-based visualization tool.
  • Interpreter-testing to make sure that any multimedia elements work cross-platform and that the speed is acceptable on various machines (I am not going to target PDAs here because I think that’s basically a lost cause, but I want to see us offer something acceptable on a range of modern desktops and laptops).

My personal goal was to get this done by the end of 2008, but too much work remains to finish in that window. That is disappointing.

7 thoughts on “Alabaster Update”

  1. Hey Emily, I stumbled upon the too-many-activities bug when doing something completely different. It’s an Inform bug. This will reproduce it:

    ———
    “activitybug” by Ron Newcomb

    Distant is a room.

    Understand “[any thing] bar” as barring. Barring is an action applying to one visible thing.
    ————

    And then type in “foo” at the player’s prompt 20 times in a row. It’s the “any” in “[any thing]” that causes it: the Printing A Parser Error activity is started twice in a row, but only finished once, so the number of activities currently in use ratchets up by one on every parser error thrown.

    -Ron
