Memory and Knowledge for Characters

[For a couple of years now, I’ve reserved the first Tuesday of the month for a review of a book on writing or game design that might be of interest to IF folks. I’m still doing one of those in March, but it will come out on the 19th, while I’m at GDC.

Instead, this post is part of a short series on Character Engine and what we’re doing at Spirit AI. I’m writing these posts with IF and interactive narrative folks in mind, but more general-audience versions of the same content are also appearing on Spirit’s Medium account. Follow us there if you’re interested in hearing regularly about what Spirit is up to.]

Knowledge and memory are a somewhat vexed area for game characters. It’s easy to think of characters who don’t remember the last fifty times you asked them the exact same lore question, who are strangely forgetful about the ways you’ve harmed them, or who aren’t equipped to answer common-sense questions about the world they live in.

So why is this a problem? Simply recording that something has happened is not the main challenge. We can set flags; we can assign variables; we can check quest journals to see what the player has already done. We can refer back to whatever data store is otherwise tracking world state in the game.

The hard part is building a system where

  • everything important to remember is stored in a reasonably systematic way
  • differences between world truth and character knowledge are handled as much as (and no more than) useful
  • there’s a way to track and author for the combinations of possible state so that the NPCs always have something to say about what they remember and know

There are quite a few technical, design, and writing challenges packed into those three bullet points.
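To make the first two bullet points concrete, here is a minimal sketch of one way to separate world truth from character belief. This is purely illustrative — it is not Character Engine’s architecture, and every name in it is hypothetical — but it shows the shape of the problem: beliefs can go stale, and the character still needs a line for facts it never learned.

```python
from dataclasses import dataclass, field

@dataclass
class WorldState:
    """Ground truth: the facts the game engine itself tracks."""
    facts: dict = field(default_factory=dict)

@dataclass
class CharacterKnowledge:
    """One character's partial, possibly stale view of those facts."""
    beliefs: dict = field(default_factory=dict)

    def learn(self, key, value):
        self.beliefs[key] = value

    def answer(self, key):
        # Answer from belief, not from world truth, and always have
        # *some* line even when the fact was never learned.
        if key in self.beliefs:
            return f"As far as I know, the {key} is {self.beliefs[key]}."
        return "I couldn't tell you anything about that."

world = WorldState(facts={"mayor": "Alice"})
guard = CharacterKnowledge()
guard.learn("mayor", "Alice")   # the guard witnessed or was told this
world.facts["mayor"] = "Bob"    # the world moves on; the belief goes stale

print(guard.answer("mayor"))     # stale but in-character answer
print(guard.answer("treasure"))  # graceful fallback
```

Even in this toy version, the authoring burden of the third bullet shows up immediately: every fact a character might be asked about needs both an in-knowledge response and a fallback.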

Continue reading “Memory and Knowledge for Characters”

West of Loathing (Asymmetric)


West of Loathing is a fourth-wall-breaking comedy RPG descended from Kingdom of Loathing. The art is made of stick figures. The combat tends toward the gentle side, difficulty-wise. You may spend a lot of your time reaching into disgusting spittoons and harvesting needles from haystacks.

It’s not a genre of game I typically cover here — and I generally don’t have time to play loads of RPGs, which tend to demand a lot more time each than interactive fiction. But there is in fact a certain IF-y sensibility about many of the interactions. You can wander around the world and pick people and things to interact with, but once you do, you enter into a choice-based exchange. These make good use of standard small-scale choice structures: situations where your persistence is tested against assurances that you’re getting into trouble, or where the risk goes up. (You can play poker against other denizens of the town, which is a classic stakes-go-up, are-you-sure-you-stay-in scenario.)
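That stakes-go-up structure is simple enough to sketch. A toy version (my own illustration, not anything from the game’s code; the use of Meat as currency is just a nod to the setting) might look like this:

```python
import random

def press_your_luck(pot=10, bust_chance=0.25):
    # The classic are-you-sure-you-stay-in loop: each round the pot
    # grows, the warnings escalate, and so does the risk of busting.
    while True:
        print(f"The pot is {pot} Meat. This is starting to feel unwise.")
        if input("Stay in? (y/n) ").strip().lower() != "y":
            return pot
        if random.random() < bust_chance:
            print("You bust and lose everything.")
            return 0
        pot *= 2
        bust_chance = min(0.9, bust_chance + 0.1)

print(f"You walk away with {press_your_luck()} Meat.")
```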

Anyway. I don’t have loads of analysis on this one, and I’m not done with it yet, but it’s entertaining and may appeal to some folks here. (And yes, I know it came out last year. I’m slow.)

World Models Rendered in Text

Last month I wrote a bit about text generation and generated narratives overall. This month, I’ve been looking more at parser games — games that typically are distinguished by (among other things) having an expressive (if not very discoverable) mode of input along with a complex world model.

My own first parser IF projects were very interested in that complexity. I liked the sensation of control that came from manipulating a detailed imaginary world, and the richness of describing it. And part of the promise of a complex world model (though not always realized in practice) was the idea that it might let players come up with their own solutions to problems, solutions that weren’t explicitly anticipated by the author.

It might seem like these are two extremes of the IF world: parser games are sometimes seen as niche and old-school, so much so that when I ran June’s London IF Meetup focused on Inform, we had some participants asking if I would start the session by introducing what parser IF is.

Meanwhile, generative text is sometimes not interactive at all. It is used for explorations that may seem high-concept, or else mostly of technical interest, in that they push on the boundaries of current text-related technology. (See also Andrew Plotkin’s project using machine learning to generate imaginary IF titles. Yes, as an intfiction poster suggested, you could also do that with an older Markov implementation, but the point of that particular project was the exercise of applying machine learning to the task.)
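For anyone who wants to try the Markov route themselves, the core of a word-level title generator really is tiny. This sketch is mine, not Plotkin’s or the poster’s, and the toy corpus stands in for a real list of thousands of IF titles:

```python
import random
from collections import defaultdict

def train(titles):
    # Map each word (plus a start marker) to the words that follow it.
    model = defaultdict(list)
    for title in titles:
        words = ["<s>"] + title.split() + ["</s>"]
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model):
    word, out = "<s>", []
    while True:
        word = random.choice(model[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

model = train(["Curses of the Deep", "Shade of the Deep", "A Mind of Winter"])
print(generate(model))  # e.g. "Curses of the Deep" or a recombination
```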

There’s a tighter alignment between these types of project than might initially appear. Bruno Dias writes about using generative prose over on Sub-Q magazine. And Liza Daly has written about what a world model can do to make generated prose better: more coherent, more compelling.
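To give a deliberately tiny example of what a world model buys you, the sketch below consults and updates object state before emitting each sentence, so successive sentences can’t contradict one another. It’s my own illustration of the idea, not Daly’s implementation:

```python
import random

# A tiny world model: the generator consults and updates object state
# before each sentence, so the output stays internally consistent.
world = {"lamp": {"lit": False}, "door": {"open": False}}

def narrate(obj):
    state = world[obj]
    if obj == "lamp":
        if not state["lit"]:
            state["lit"] = True
            return "You strike a match and the lamp flares to life."
        return "The lamp burns steadily."
    else:  # door
        if not state["open"]:
            state["open"] = True
            return "The door creaks open."
        return "The open door sways in the draft."

for _ in range(4):
    print(narrate(random.choice(list(world))))
```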

Continue reading “World Models Rendered in Text”

Narrative Design for Indies (Edwin McRae)

Narrative Design for Indies: Getting Started. This is a brief Kindle book published in October of last year.

Edwin McRae is a writer and narrative designer who specializes in indie projects, and has written some blog tutorials and guidance for ink, as well. His book is designed to help aspiring indies figure out what they might need in the area of story, whether they need to hire a writer, and what expectations they should have going into that process.

McRae’s approach is very much conscious of resource constraints. Voiceover is expensive: what can you do without it? What methods of delivering story are affordable and easy to sneak into your game? How can you manifest important story information through gameplay and flavor text that you needed to create anyway?

Continue reading “Narrative Design for Indies (Edwin McRae)”

Venom, Beeswax, Fallen 落葉 Leaves

Having come up with an idiosyncratic terminology (Venom, Beeswax, Mushroom, Salt and Egg) for talking about some aesthetic aspects of procedural literature that matter to me, I now find myself reverting to the same terminology even when talking about other people’s work.

Fallen 落葉 Leaves is a procedurally generated poetry cycle in this year’s IF Comp. It draws on sample texts from Confucian poetry, and combines them and other elements densely, producing couplets with a great deal of strangeness per line. In my terms, it’s therefore heavily applying the principles of Venom (particularity, color, surprise) and Beeswax (varied, allusive, culturally rich source material).

The effect is indeed a bit like reading the translation of something whose metaphors, idioms, and cultural references are outside one’s personal ken:


Some phrases sampled from the Shījīng (詩經),
the Confucian Book of Songs, the Classic of Poetry,
as translated by Arthur Waley. — author’s note for Fallen 落葉 Leaves

To start, you select an adverb from a menu and a verb from another menu; then a poem is generated in couplets, with your adverb and verb plugged into one of the couplets. You may repeat this loop as many times as you like, your adverb and verb changing the contents of the cycle overtly and perhaps also in more subtle ways. The author suggests that a hundred or more moves might be appropriate, and that one might want to pull out specific couplets. Looking at the source code reveals that there are many variables being tracked, perhaps iteratively across repeated builds of the poem.
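To make that loop concrete, here is a rough sketch of the structure as I understand it from play. The corpus lines are placeholders of my own, not the game’s actual phrases, and the real source clearly tracks far more state than this:

```python
import random

# Placeholder corpus lines, not the game's actual phrases.
OPENINGS = ["The wild geese cross the marsh grass",
            "Plucked mallows fill the shallow basket"]
CLOSINGS = ["your chariot turns back at the ford",
            "the ninth month ends our meeting"]

def build_poem(adverb, verb, history):
    couplets = [(o, random.choice(CLOSINGS)) for o in OPENINGS]
    # The player's chosen words are plugged into one couplet...
    i = random.randrange(len(couplets))
    couplets[i] = (couplets[i][0], f"{adverb} you {verb} at the gate")
    # ...and persist as state that could color later builds of the poem.
    history.append((adverb, verb))
    return couplets

history = []
for first, second in build_poem("slowly", "depart", history):
    print(f"{first},")
    print(f"    {second}.")
```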

Because the phrases are so allusive, it is not always easy to extract even a notional meaning from them. More often, I found that I could come up with something but that it was a general rather than a precise interpretation:

You sniff oil — writing home about our walks on the terrace —
Your sailing moon, your arrival — sing my pulse.

The first line is easy enough to imagine: the correspondent stopping mid-letter to breathe in the scent of a perfumed oil, possibly. “Your sailing moon, your arrival” perhaps refer to the time when the lover is to set out and rejoin the poet; “sing my pulse” indicates, presumably, that the poet’s life and heartbeat are in some way responsive to the lover’s movements, or else described by them.

Taken as a whole, though, across multiple sonnets, the experience becomes suddenly Mushroomy: overtly repetitive and generative, not concealing how much it is the result of mechanical operation. The grammar that generates sonnets seems to hit the same major points in each couplet, with allusions to erotic time the lovers spent together in the middle, and then a disagreement (with the player’s adverb/verb choice) toward the end, and the lover departing. (Sometimes on a “well-dressed horse,” which I thought was particularly good.)

Continue reading “Venom, Beeswax, Fallen 落葉 Leaves”

Casual Procgen Text Tools

Last Thursday I was at the PCG-meets-autotesting unconference at Falmouth, where we organized into work-groups to talk through ideas related to the conference theme. This was a really fun time, and I am grateful to the organizers and my fellow guests for making it so intriguing.

Our morning work-group started with a suggestion I had: what if there were a casual text-generation tool like Tracery, but one that provided a similar level of help in assembling corpora for leaf-level node expansion? What would help new users learn about selecting and acquiring a corpus? What would help them refine it to the point where they had something they were happy using? (And for that matter, are there applications of this that we could see being useful to expert users as well? What could such a tool offer that is currently difficult to do?)
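For readers who haven’t used Tracery: it expands a grammar of named rules until it bottoms out in lists of literal strings, and those leaf-level lists are the corpora in question. Here is a minimal Tracery-like expander (a sketch of the idea, not Tracery’s actual implementation); the hypothetical tool would focus its help on building and curating lists like "mood" and "animal":

```python
import random
import re

grammar = {
    "origin": ["The #mood# #animal# waits by the #place#."],
    "mood":   ["melancholy", "jubilant", "watchful"],   # leaf corpus
    "animal": ["heron", "fox", "moth"],                 # leaf corpus
    "place":  ["tide pool", "signal tower"],            # leaf corpus
}

def expand(symbol="origin"):
    rule = random.choice(grammar[symbol])
    # Replace each #symbol# reference with a recursive expansion.
    return re.sub(r"#(\w+)#", lambda m: expand(m.group(1)), rule)

print(expand())  # e.g. "The watchful moth waits by the tide pool."
```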

This idea sprang from the discovery that I spend a lot of my own procgen development time simply on selecting and revising corpora. What will productively add to the feel and experience of a particular work, and what should be excluded? How small or large do the corpora need to be? Is there behavior that I can’t enforce at the grammar level and therefore have to implement through the nature of the corpus itself? (I talk a bit about those concerns in my PROCJAM talk (video, slides), especially under the Beeswax category.)
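As an example of that last concern: a constraint like phrase length or tone can’t be expressed in a plain replacement grammar, so it gets enforced when the corpus itself is assembled. A sketch of what that vetting step can look like, with placeholder heuristics of my own:

```python
def vet_corpus(candidates, max_words=3, banned=("blood", "corpse")):
    """Filter a raw phrase list down to entries that fit the desired
    feel: constraints the grammar itself can't express."""
    kept = []
    for phrase in candidates:
        if len(phrase.split()) > max_words:
            continue   # too long for the line rhythm
        if any(b in phrase.lower() for b in banned):
            continue   # wrong tone for this piece
        kept.append(phrase)
    return kept

raw = ["pale heron", "corpse light", "long slow evening rain", "signal tower"]
print(vet_corpus(raw))   # ['pale heron', 'signal tower']
```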

We had a great conversation with Gabriella Barros, Mike Cook, Adam Summerville, and Michael Mateas. The discussion ranged over a number of additional possibilities, some of which went considerably beyond the initial “naive user” brief.

Continue reading “Casual Procgen Text Tools”