Manu Sharma, Santi Ontañón, Manish Mehta, and Ashwin Ram are publishing some research into drama management using a simplified version of Anchorhead (with a choose-your-own-adventure interface rather than a text parser).
The full paper is worth a read (though fairly technical), but the gist is that they proceeded by collecting a series of cases from players and determining which players liked which subplots. They then designed a drama manager that would compare the current player's behavior against its cases, determine which subplots this player was likely to be most interested in, and hint the player in the direction of those subplots. The result appears to be a better experience, especially for non-gamers, though some players (especially the more experienced ones) disliked being hinted too heavily toward the things they would enjoy.
When no confident predictions can be made from the player's behavior, the drama management model falls back on author-defined rules about what to present when.
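To make the idea concrete, here is a minimal sketch of what a case-based predictor with an author-rule fallback might look like. All of the names, the action counts, the similarity measure, and the confidence threshold are my own invented assumptions for illustration, not the paper's actual model, which is considerably more sophisticated.

```python
# Hypothetical sketch: compare a player's action-count profile against
# stored cases; if no case is a confident match, fall back on the
# author's default ordering. All values here are invented.
from collections import Counter
from math import sqrt

# Each stored "case": a profile of action counts, plus the subplots
# that player turned out to enjoy.
CASES = [
    (Counter(move=40, examine=12, talk=3), {"occult_library"}),
    (Counter(move=8, examine=30, talk=20), {"family_history"}),
    (Counter(move=15, examine=25, talk=5), {"occult_library", "sewers"}),
]

AUTHOR_DEFAULT = ["family_history"]  # author-defined fallback
CONFIDENCE_THRESHOLD = 0.8           # below this, don't trust the match

def cosine(a, b):
    """Cosine similarity between two action-count profiles."""
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def predict_subplots(current_profile):
    """Return subplots to hint toward, or the author's defaults if no
    stored case resembles this player closely enough."""
    best_sim, best_subplots = 0.0, None
    for profile, liked in CASES:
        sim = cosine(current_profile, profile)
        if sim > best_sim:
            best_sim, best_subplots = sim, liked
    if best_subplots is None or best_sim < CONFIDENCE_THRESHOLD:
        return AUTHOR_DEFAULT
    return sorted(best_subplots)
```

A restless, movement-heavy player would match the first case and get hinted toward that case's subplots; a player whose behavior resembles no stored case drops through to the author's defaults.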
I have a bunch of minor quibbles with particulars of the study, but found the conclusions intriguing.
Quibbles. First, the profiling of player behavior appears to have focused on the verbs used and not on the nouns, and I'm not sure that's going to get you as much information as would be useful. For instance, it's not clear to me that counting the number of player movements, but not where the player was going, gets you enough information. It will tell you something about how restless the player is and perhaps whether (as the paper points out) he's inclined to a breadth-first approach to playing; on the other hand, movement toward one place or another would seem to indicate a particular interest in the plot points unfolding there. Of course, you have to draw the line somewhere about how much detail in player behavior you're going to track: if there is too much, the useful correlations may get drowned out in the noise.
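The verbs-versus-nouns point is easy to see with a toy play trace. This is purely illustrative (the locations and commands are invented, not drawn from the study): profiling the same trace with and without nouns yields very different pictures of the player.

```python
# Hypothetical play trace of (verb, noun) pairs; all names invented.
from collections import Counter

trace = [
    ("go", "library"), ("go", "library"), ("examine", "tome"),
    ("go", "sewers"), ("examine", "altar"), ("go", "library"),
]

# Verb-only profile: tells you the player is restless (lots of
# movement), but not what they are restless *about*.
verb_profile = Counter(verb for verb, noun in trace)
# counts: go=4, examine=2

# Verb+noun profile: the repeated trips to the library suggest a
# specific interest that the verb-only profile washes out.
pair_profile = Counter(trace)
# counts: ("go", "library")=3, among others
```

Three of the four movements go to the same place, which the verb-only count cannot distinguish from aimless wandering.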
Second, one of the drama manager’s methods is to artificially block off subplots or routes of inquiry that it has decided will not interest this particular player. That (apparently) can mean locking or unlocking locations, moving objects from where they would usually be found, etc. These “deniers” were not universally popular with players, and I think they might well make the game feel a bit more arbitrary than it would otherwise.
Finally (and from my point of view most problematically) it seems to me that this management style might mean the player is introduced to the beginnings of many subplots, but never sees the end of the ones that the game decides are irrelevant to him. I suppose it’s conceivable that could work out, but I can’t help suspecting that some of the time you’d wind up with never-resolved loose ends.
Despite these gripes and proto-gripes, it's a really interesting piece of work. The recommendations in the conclusion of the article ultimately seemed to me to have more to do with player guidance than with constructing a coherent and compelling narrative: how to help players who might not understand gaming very well by providing timely hints when they were lost, and using empirically determined player models to detect when that lostness was occurring. But that's certainly something still worth doing: good player help is hard to achieve. Some IF has experimented in this direction: Inevitable's multiple levels of puzzle difficulty do so in a light way, while Blue Lacuna and City of Secrets are both closer to the methods described in the article, providing active hinting in response to the game's perceptions about player experience. What we have by and large not seen so far is the use of data-driven player models that would allow the game to assess the player's current situation by comparison with previous data.