Iterating San Tilapian Studies

Over the years since I first ran it, I’ve made a few iterations of my narrative (large-)party game San Tilapian Studies. It’s also been run by other people with their own versions of the stickers and text, and even been made available to the public at the Wellcome Play Spectacular.

In 2015, I offered a version of it as an IF Comp prize, and the winner requested that I adapt it to a particular fantasy world idea he had in mind. That version was mechanically pretty similar to the original — partygoers are given stickers with nouns or descriptive phrases on them, and must find other participants whose stickers complement their own in order to assemble a three-segment description of an object. Then they make up some backstory about that object and write it in the book.

The main tweaks I made were:

  • the game package also included a fantasy outline map that players could use to locate places in their game world
  • the sticker collection included a few wild-card words that could be played as either a noun or an adjective (like “bone” or “mirror”). The intent was to help keep things moving smoothly if people felt they were running out of plausible matches

This year I offered the San Tilapian Studies kit again. One of the winners requested the original version, which is cool — it’s fairly easy for me to do that, and I think the Ruritanian romance setting provides a lot of room to imagine different miniature plot lines: love stories, politics, military affairs, and interpersonal intrigue all fit into that world.

The other winner asked for a Lovecraftian re-skin of the set, and that was a bigger revision.

The rest of this post is about the design work I did to adapt San Tilapian Studies, first for a minor upgrade and second for a larger re-skin. While the result isn’t played on a computer, this is an exercise in developing a corpus for procedural use.

Designing, testing, and editing a corpus is in fact a significant part of the work in any procedural text project — so while this is a bit different from, say, Annals of the Parrigues in that it doesn’t result in a single finished text, a lot of the same issues come up in the corpus design.
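
(To make that concrete, here is a minimal sketch in Python of the kind of corpus involved. It is not the kit’s actual tooling or word list; the stickers below are invented. But it shows the basic shape: segments tagged by the roles they can play, wild-cards tagged with both, and three-segment descriptions assembled from them.)

```python
import random

# A tiny stand-in corpus: each sticker is a phrase plus the roles it can play.
# Wild-card words ("bone", "mirror") carry both roles.
STICKERS = [
    ("jeweled", {"adjective"}),
    ("long-lost", {"adjective"}),
    ("ceremonial", {"adjective"}),
    ("dagger", {"noun"}),
    ("locket", {"noun"}),
    ("bone", {"adjective", "noun"}),     # wild-card
    ("mirror", {"adjective", "noun"}),   # wild-card
]

def draw(role, used):
    """Pick an unused sticker that can play the given role."""
    options = [word for word, roles in STICKERS if role in roles and word not in used]
    return random.choice(options)

def three_segment_object():
    """Assemble a description from two descriptive segments and a head noun."""
    used = set()
    first = draw("adjective", used); used.add(first)
    second = draw("adjective", used); used.add(second)
    head = draw("noun", used)
    return f"the {first} {second} {head}"

for _ in range(3):
    print(three_segment_object())
```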

Continue reading “Iterating San Tilapian Studies”

MEXICA (Rafael Pérez y Pérez)

[Cover image of MEXICA: 20 Years — 20 Stories]

MEXICA: 20 Years — 20 Stories is a book of twenty machine-generated stories plotted by the program MEXICA. These have been rendered into natural language, both in English and in Spanish, by the machine’s designer Rafael Pérez y Pérez.

The stories concern characters of Aztec legend: a jaguar knight, a princess, a warrior, a lady, the god Huitzilopochtli. They fall in love, they get into fights, they pursue or evade one another, they run away to hide beside the volcano Popocatépetl. They experience subtler things, too: lust, jealousy, inner conflict, mingled love and hate, even embarrassment at having been inconsistent.

The stories were produced as plot descriptions by the machine, and their human-readable renderings are both translations of those plots — so the Spanish is not a translation of the English, or vice versa. There are, I’m told, subtle differences between the Spanish and English versions of the stories, though my Spanish is not good enough to appreciate this deeply. But even with the most basic reading knowledge of Spanish, it is clear that sometimes elements are named differently in the two versions.
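
(That division of labor can be sketched in a few lines of code. This is only an illustration of the structure, not MEXICA’s actual representation or vocabulary, but it shows why neither text is a translation of the other: both are rendered from the same language-neutral plot.)

```python
# Toy plot: language-neutral actions over language-neutral entities.
PLOT = [("fall_in_love", "knight", "princess"),
        ("fight", "knight", "warrior")]

# Each language has its own lexicon and templates for rendering the plot.
LEXICON = {
    "en": {"knight": "the jaguar knight", "princess": "the princess",
           "warrior": "the warrior",
           "fall_in_love": "{a} fell in love with {b}.",
           "fight": "{a} got into a fight with {b}."},
    "es": {"knight": "el caballero jaguar", "princess": "la princesa",
           "warrior": "el guerrero",
           "fall_in_love": "{a} se enamoró de {b}.",
           "fight": "{a} se peleó con {b}."},
}

def render(plot, lang):
    """Render the plot-level actions into one language's surface text."""
    lex = LEXICON[lang]
    return " ".join(lex[verb].format(a=lex[a].capitalize(), b=lex[b])
                    for verb, a, b in plot)

print(render(PLOT, "en"))
print(render(PLOT, "es"))
```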

At the same time, the versions are narrated in a way that retains the evidence of their machined nature. For instance, here is a passage:

The princess was a proud native of the Great Tenochtitlán City.

The competition between the princess and the eagle knight had reached levels of strong animosity.

Quickly, the princess and the eagle knight were immersed in a fight.

This is perfectly readable English, but it suggests an outline for a longer piece, a short story or even a novel, in which these things we are being told are rendered more fully. Performing that more extensive rendering, though, would have concealed what exactly the machine was doing — and it is extremely interesting to have this clearly evident. Besides, as the afterword indicates, we might consider that writing in this way is simply the style and voice of MEXICA the generator.

Continue reading “MEXICA (Rafael Pérez y Pérez)”

Parrigues Tarot (draft)

I’ve tweeted a bunch about this project, but talked about it less on my blog: for a while I’ve been working on a followup to Annals of the Parrigues called Parrigues Tarot, a system that generates tarot card descriptions like these:

[Screenshot: sample generated tarot card descriptions]

Though it might seem similar, this was actually a much more difficult and extensive project than Annals of the Parrigues, for several reasons.

First, it’s doing more work with narrative arcs. Many (though not all) cards are built around the idea that the protagonist wants something and takes some action to try to achieve it (or else fears something and takes action to try to avoid it). The chosen action either succeeds or fails, with some results.

Not all of that story has to be represented explicitly in the card description: indeed, the fact that these are nominally descriptions of static images was a little limiting, because I tried to only describe things where an action and its consequence could be pretty clearly implied visually. Meanwhile, sometimes the story can include additional information — why does the protagonist need money in the first place, for instance? But centering the story generation on an action and a desired outcome gave the output more consistent narrative potency than various other constructions I tried.
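
(For illustration, the skeleton of that arc might look something like the following. This is a toy sketch rather than the generator’s actual code or data; the desires and actions are invented stand-ins.)

```python
import random

# A toy version of the arc: a desire, an attempted action, and an outcome.
ARCS = {
    "money to pay a debt": ["gambles at cards", "sells a family treasure"],
    "a lost heirloom": ["searches the old estate", "bargains with a fence"],
    "the favor of the court": ["stages a lavish feast", "flatters the chamberlain"],
}

def card_story(rng=random):
    """Return one card's implied story as a single sentence."""
    desire = rng.choice(list(ARCS))
    action = rng.choice(ARCS[desire])
    outcome = rng.choice(["and succeeds, at a cost", "and fails, visibly"])
    return f"Wanting {desire}, the figure on the card {action} {outcome}."

print(card_story())
```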

Second, the system uses much less random content and requires much more salience of its symbols: if it mentions a lion, or diamonds, or the color blue, there’s usually some underlying tagging that makes those elements relevant to the meaning of the card.
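
(A sketch of the principle, with invented symbols and tags: a symbol is only eligible for a card if its tags overlap the card’s intended meaning.)

```python
# Each symbol carries the meanings it can reinforce; the generator only draws
# symbols whose tags overlap the meaning of the card being described.
SYMBOLS = {
    "a lion": {"courage", "pride"},
    "diamonds": {"wealth", "permanence"},
    "the color blue": {"calm", "truth"},
    "a broken chain": {"freedom", "loss"},
}

def salient_symbols(card_meaning):
    """Return only the symbols relevant to this card's meaning."""
    return [name for name, tags in SYMBOLS.items() if tags & card_meaning]

print(salient_symbols({"wealth", "pride"}))  # ['a lion', 'diamonds']
```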

The system is also able to do some quirky variations, like “find a node expansion that matches the current world state except it should be opposed along one axis” — useful for finding an opposite for something already pictured: a thief to go up against a virtuous judge, say.
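
(Here is a toy version of that query, with made-up trait axes rather than the system’s real representation: each candidate expansion carries trait values, and we look for one that agrees with the current state everywhere except a single axis, where it takes the opposite value.)

```python
# Each candidate expansion carries trait values on a few axes; we want one that
# matches the current state on every axis except exactly one, where it is opposed.
EXPANSIONS = {
    "a virtuous judge": {"lawful": 1, "wealthy": 1, "violent": -1},
    "a thief":          {"lawful": -1, "wealthy": 1, "violent": -1},
    "a brawling guard": {"lawful": 1, "wealthy": -1, "violent": 1},
}

def opposed_on_one_axis(state):
    """Find an expansion differing from the state on exactly one axis."""
    for name, traits in EXPANSIONS.items():
        opposed = [axis for axis in state if traits.get(axis) == -state[axis]]
        matched = [axis for axis in state if traits.get(axis) == state[axis]]
        if len(opposed) == 1 and len(matched) == len(state) - 1:
            return name
    return None

print(opposed_on_one_axis(EXPANSIONS["a virtuous judge"]))  # 'a thief'
```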

But maybe the biggest difference is simply down to the nature of the medium. The Annals are, and are expected to be, repetitive, with modest amounts of new information per entry. A tarot card is expected to be compact and evocative with high information density.

Continue reading “Parrigues Tarot (draft)”

World Models Rendered in Text

Last month I wrote a bit about text generation and generated narratives overall. This month, I’ve been looking more at parser games — games that typically are distinguished by (among other things) having an expressive (if not very discoverable) mode of input along with a complex world model.

My own first parser IF projects were very much concerned with that complexity. I liked the sensation of control that came from manipulating a detailed imaginary world, and the richness of describing it. And part of the promise of a complex world model (though not always realized in practice) was the idea that it might let players come up with their own solutions to problems, solutions that weren’t explicitly anticipated by the author.

It might seem like these are two extremes of the IF world: parser games are sometimes seen as niche and old-school, so much so that when I ran June’s London IF Meetup focused on Inform, we had some participants asking if I would start the session by introducing what parser IF is.

Meanwhile, generative text is sometimes not interactive at all. It is used for explorations that may seem high-concept, or else mostly of technical interest, in that they push on the boundaries of current text-related technology. (See also Andrew Plotkin’s project using machine learning to generate imaginary IF titles. Yes, as an intfiction poster suggested, that’s something you could also do with an older Markov implementation, but the point of that particular project was to apply the newer tech to this goal.)
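
(For the curious, the older sort of approach the poster meant fits in a few lines of Python. The seed titles below are invented placeholders, not Plotkin’s training data; the point is just how little machinery a word-level Markov chain needs.)

```python
import random
from collections import defaultdict

TITLES = [
    "The Hollow Lighthouse",
    "A Night at the Lighthouse",
    "The Night Garden",
    "Garden of Broken Clocks",
]

def build_chain(titles):
    """Map each word to the words that have followed it; None marks an ending."""
    chain = defaultdict(list)
    for title in titles:
        words = title.split()
        for a, b in zip(words, words[1:] + [None]):
            chain[a].append(b)
    return chain

def generate_title(chain, start_words):
    """Walk the chain from a random starting word until it hits an ending."""
    word = random.choice(start_words)
    out = []
    while word is not None:
        out.append(word)
        word = random.choice(chain[word])
    return " ".join(out)

chain = build_chain(TITLES)
print(generate_title(chain, [t.split()[0] for t in TITLES]))
```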

There’s a tighter alignment between these types of project than might initially appear. Bruno Dias writes about using generative prose over on Sub-Q magazine. And Liza Daly has written about what a world model can do to make generated prose better, more coherent or more compelling.

Continue reading “World Models Rendered in Text”

Procedural Generation in Game Design

Procedural Generation in Game Design is out! Kate Compton of Tracery fame writes about generative art toys; Mike Cook (PROCJAM, Games by Angelina) writes about ethical generation and also about the procedural generation of game rules; Harry Tuffs (A House of Many Doors) writes about procedural poetry generation. Jason Grinblat and Brian Bucklew (Caves of Qud) each have a chapter. Gillian Smith (Threadsteading, plus lots of cool research) writes about evaluating and understanding what’s been generated. Ben Kybartas (Kitfox Games) covers procedural story and plot generation.

The whole volume is edited by Tanya X Short (Moon Hunters) and Tarn Adams (Dwarf Fortress). And I am leaving out a lot of cool people and chapters here, but you can check out the full table of contents on the website.

My contribution — drawing on experiences from Versu, from my character-based parser IF, and from assorted other projects — is a chapter on characters: how generating dialogue and performances can help realize an authored character; approaches to generating characters; considerations about what is even interesting to auto-generate.

*

And in a related update to a previous post: I’m happy to say that the PROCJAM Kickstarter has succeeded and has now put out a call for artists to make art packs for procedural work, together with a call for tutorial authors. If their funding goes even higher, they’ll be able to commission two art packs; translate the tutorials they build into additional languages; and hit some other cool stretch goals.

Venom, Beeswax, Fallen 落葉 Leaves

Having come up with an idiosyncratic terminology (Venom, Beeswax, Mushroom, Salt and Egg) for talking about some aesthetic aspects of procedural literature that matter to me, I now find myself reverting to the same terminology even when talking about other people’s work.

Fallen 落葉 Leaves is a procedurally generated poetry cycle in this year’s IF Comp. It draws on sample texts from Confucian poetry, and combines them and other elements densely, producing couplets with a great deal of strangeness per line. In my terms, it’s therefore heavily applying the principles of Venom (particularity, color, surprise) and Beeswax (varied, allusive, culturally rich source material).

The effect is indeed a bit like reading the translation of something whose metaphors, idioms, and cultural references are outside one’s personal ken:

[Image: a page of the Shījīng]

Some phrases sampled from the Shījīng (詩經), the Confucian Book of Songs, the Classic of Poetry, as translated by Arthur Waley. — author’s note for Fallen 落葉 Leaves

To start, you select an adverb from a menu and a verb from another menu; then a poem is generated in couplets, with your adverb and verb plugged into one of the couplets. You may repeat this loop as many times as you like, your adverb and verb changing the contents of the cycle overtly and perhaps also in more subtle ways. The author suggests that a hundred or more moves might be appropriate, and that one might want to pull out specific couplets. Looking at the source code reveals that there are many variables being tracked, perhaps iteratively across repeated builds of the poem.
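
(From the outside, the loop might be sketched roughly like this. The menus, images, and state variables below are my own inventions for illustration, not the game’s actual grammar or source.)

```python
import random

# Invented stand-ins for the player's menus and the couplet grammar.
ADVERBS = ["slowly", "bitterly", "gladly"]
VERBS = ["depart", "linger", "write"]
IMAGES = ["the sailing moon", "a well-dressed horse", "the wild geese",
          "oil on the unfinished letter", "the road north"]

def build_poem(adverb, verb, state):
    """One pass of the loop: random images fill most slots, the player's
    adverb and verb are spliced into one couplet, and state persists across passes."""
    state["passes"] += 1
    a, b, c = random.sample(IMAGES, 3)
    return "\n".join([
        f"{a.capitalize()} crosses the terrace — {b} answers nothing.",
        f"You {verb} {adverb} — {c} waits at the gate.",
    ])

state = {"passes": 0}
for _ in range(2):   # the author suggests a hundred or more passes
    print(build_poem(random.choice(ADVERBS), random.choice(VERBS), state))
    print()
```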

Because the phrases are so allusive, it is not always easy to extract even a notional meaning from them. More often, I found that I could come up with something but that it was a general rather than a precise interpretation:

You sniff oil — writing home about our walks on the terrace —
Your sailing moon, your arrival — sing my pulse.

The first line is easy enough to imagine: the correspondent stopping mid-letter to breathe in the scent of a perfumed oil, possibly. “Your sailing moon, your arrival” perhaps refer to the time when the lover is to set out and rejoin the poet; “sing my pulse” indicates, presumably, that the poet’s life and heartbeat are in some way responsive to the lover’s movements, or else described by them.

Taken as a whole, though, across multiple sonnets, the experience becomes suddenly Mushroomy: overtly repetitive and generative, not concealing how much it is the result of mechanical operation. The grammar that generates sonnets seems to hit the same major points in each sonnet, with allusions to the erotic time the lovers spent together in the middle, and then a disagreement (with the player’s adverb/verb choice) toward the end, and the lover departing. (Sometimes on a “well-dressed horse,” which I thought was particularly good.)

Continue reading “Venom, Beeswax, Fallen 落葉 Leaves”