The Uncanny Deck: Co-authoring with GPT-2

Artbreeder landscape, developed as an image of Booknesford, from Annals of the Parrigues. Angry citizens threw the last of the Parrigues from the cliff in order to execute her.

Many years ago, I started writing a fantasy story. In the story, there was a culturally important game you could play with friends, which was, most of the time, a bit like poker.

The thing was, every once in a while you would draw some totally weird extra card that had never been in the pack before. The Steward of Hearts. The King of Arrows. Both suits and ranks were open to change.

In the story, this was the work of prophetic spirits.

I never finished writing the story because I was really less interested in the plot than I was in the deck itself — the idea of a set of symbols that was mostly known and constrained and human-made, but had an occasional dose of the uncanny.

To me, that dose of the uncanny is also part of the appeal of working with AI — the way it can, at its best, introduce elements that feel both significant and unexpected.

Over the past year or so, I’ve worked on and off on making a text generator that describes fortune-telling cards; where the deck has its own definite imagery and set of meanings; where the generator usually stays approximately on form; but where you sometimes find a card you would not expect at all.

The rest of the article goes a little bit into what I’ve done, what it produces, and why I find this an interesting way to write with a machine.

If you’d like your own experimental output from it: through this weekend (until July 19), I’m supporting this fundraiser by generating tarot card readings or new Parrigues-style towns with this generator. Donate any amount, then ping me to let me know what you’d like. (More about that offer on Twitter.)

Now, the article:

Concept art for the Duchess of Salt, as rendered with ArtBreeder. Salt is associated with age, preservation, archiving, patterned work; while the Duchess is associated with the Venom principle.

This starts with the Parrigues Tarot. Initially I was using a tagged generative grammar like the one I used on Annals of the Parrigues, but this time producing described number and face cards (Ace to Seven; Duke, Duchess, Ducal Personage, Innkeeper, and Ghost) in five suits (Salt, Venom, Beeswax, Mushroom, and Egg). The Major Arcana, meanwhile, were much more hand-authored. Thematically, it shared the ideas of Annals of the Parrigues, because I find those ideas useful to think with.

This was definitely not a conventional Tarot deck even at the outset, but I did get the grammar to the point where it was regularly producing output I enjoyed and found interesting.

Eventually, though, the grammar-development approach started to feel limited. It was a good bit of work to add any new elements to the deck’s vocabulary — and I increasingly felt like I was fiddling around the edges of the system anyway, adding bits and bobs that would only rarely be seen.

I wanted to expand the expressive capacity of the system in directions that would be more surprising and less effortful. And I also wanted the project to acknowledge the way that these iconographic systems are never just one person’s invention.

CLIP image generator attempting to illustrate a card about a person wearing a wreath of rosemary.

So this was the point at which I turned to machine learning models.

What I wanted out of the system wasn’t necessarily an easy fit for GPT-2, on the face of it. Language models that are heavily trained on loads of internet text tend to be good at quickly learning certain kinds of question-answering tasks and related challenges, where you’re giving them a pattern that they can reproduce and flesh out based on their internalised sense of the English language (and whatever facts they have embedded in there).

That meant that I could fairly easily train a GPT model to create outputs that structurally resembled my card-description pattern: a card name, a description of what the card looked like, and some lines of interpretation about it. And doing that might even lead the system to draw on some of its own prior training about Tarot — of which there is clearly quite a bit.

But it takes more training data to tune it to remember deeper associations and themes — like “salt is associated with archiving” or “venom is associated with drugs and bright colours.” It would have been a demanding exercise to write enough training data by hand to communicate all that — but, of course, the grammar generator meant I had training data available on demand.
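One nice property of a grammar-based generator as a data source is that a fine-tuning corpus of any size can be produced mechanically. Here is a minimal sketch of that idea; the suit-theme associations are the ones mentioned above, but the rank list, the phrasing, and the one-description-per-block file format are stand-ins for the real grammar and for whatever format the actual training pipeline expects:

```python
import random

# Theme associations named in the text; the real grammar's tags are far richer.
THEMES = {
    "Salt": "age, preservation, and archiving",
    "Venom": "drugs and bright colours",
}
# A subset of the deck's ranks, purely for illustration.
RANKS = ["Ace", "Two", "Three", "Duke", "Duchess", "Ghost"]

def card_description(rng):
    """One grammar-style sample: a rank, a suit, and a theme-flavoured line."""
    suit = rng.choice(sorted(THEMES))
    rank = rng.choice(RANKS)
    return f"{rank} of {suit}: A card whose imagery turns on {THEMES[suit]}."

def build_corpus(n_samples, path, seed=0):
    """Write n_samples generated descriptions out as plain-text training data."""
    rng = random.Random(seed)
    with open(path, "w", encoding="utf-8") as f:
        for _ in range(n_samples):
            f.write(card_description(rng) + "\n\n")

build_corpus(500, "parrigues_cards.txt")
```

The resulting file can then be handed to whatever fine-tuning tool is in use; gpt-2-simple, for instance, accepts a plain text file like this as its dataset.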

To reinforce the theme ideas, but keep the generator from getting completely stuck on the standard cadences of the Parrigues Tarot, I also fed it the entire text of Annals of the Parrigues.

The resulting generator frequently makes things that look a lot like the output of my grammar-based generator, like:

Five of Eggs: A figure no more than a quarter of the way up a vast staircase that ascends with endless zigzags through parklands and follies and abandoned statues and the ruins of churches and then to graveyards and then into copses where the old gods were worshipped and then into a rockbound wilderness. The fiture is looking for the President of the World.

GPT generator on the Five of Eggs

But sometimes it will spontaneously pop up a combination of rank and suit that doesn’t exist in the original — just as I hoped from the fantasy version of the deck. “Ghost” is a rank in the original corpus, but “Butterflock” is not a suit there:

Ghost of Butterflock: The Ghost of Butterflock represents a threat to the Querent that has yet to be addressed. It may be a living bein gthat has a permanent and irrevocable impact on the way the world is, or whether that worls is possible at all.

GPT generator with a Ghost card… but not in any known suit

Because it’s an ML model, it can also be prompted for things like that explicitly — I can ask it for a description of, say, The Six of Lollipops and see what happens. (The same applies to new town names, if I want to get back a description of a Parrigues-style town.)

Purely as a side-effect, because the text of Annals of the Parrigues was included in training, it can also produce descriptions of towns — though again sometimes straying from the constraints of the original grammar. For instance, the original Annals of the Parrigues doesn’t have a “Children” category. It does have a New Conniswittle town, but the GPT-based model has put a disturbing twist on that, adding a section on the local child population:

New Conniswittle: The Sun Lies Black on the Ocean — the Calm Before the Storm

New Conniswittle lies on the very edge of the ocean, deep in the Isthmus of Garonne. The population are spread between two and three villages, living side by side in the same houses…

Children: If one gets too close to a single, very tiny child, the child will immediately seize on the offending individual, and force them away from the source of the sun. Avoid such encounters.

GPT generator, asked for a town description

This model also allows for a different cadence of co-authorship than the grammar generator of Annals of the Parrigues. There, if I wanted to add something to the text, I needed to put new nodes and expansion structures into the grammar, and then see what they would do — a loop requiring some minutes of work. And if the system created a text I almost liked, there wasn’t really any way to edit or fix it.

The GPT model, by contrast, affords two kinds of interaction with its output, one much faster and the other much slower. The slower interaction is to re-train the entire model to incorporate new data; that’s necessary if I want to, for instance, introduce a new common feature of its card descriptions. If, say, I wanted card descriptions to potentially include a paragraph about the card’s age and previous owners, that would need new data and retraining — and depending on how I approached it, that data might need to first be introduced to the generative grammar, then a bunch of generated examples produced, then those examples fed into the training process.

The fast iteration, though, is much more like interacting with a human writer who just needs a lot of editing. GPT takes a prompt, a few lines or a paragraph of text, and adds more text that it finds most probable to follow from that introduction.

That means that if the GPT produces something I almost like, but not quite, I can keep the bit I did like and use it as a prompt again, perhaps with a new random seed.
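The shape of that loop is simple enough to sketch. Below, `continue_text` is a toy stand-in for the actual sampling call (with the real model this would be something like gpt-2-simple’s `generate` with a `prefix` and a `seed`); only the loop structure is the point:

```python
import random

def continue_text(prompt, seed):
    """Toy stand-in for sampling a model continuation of `prompt`.

    A real version would call the fine-tuned model with `prompt` as its
    prefix and `seed` controlling sampling; this one just appends a
    pseudo-randomly chosen ending so the loop below is runnable.
    """
    endings = [
        "a door that opens only at low tide.",
        "the President of the World.",
        "an argument between two archivists.",
    ]
    return prompt + " " + random.Random(seed).choice(endings)

def reroll(kept, accept, max_tries=10):
    """Keep the text I liked; re-generate the rest until a draft passes."""
    for seed in range(max_tries):
        draft = continue_text(kept, seed)
        if accept(draft):
            return draft
    return None  # nothing landed; time to hand-edit the prompt instead

# Keep the opening I liked and accept the first complete-sentence draft.
kept = "Five of Eggs: A figure on a vast staircase, looking for"
draft = reroll(kept, lambda d: d.endswith("."))
print(draft)
```

In practice `accept` is just me reading the draft; the point is that the unit of iteration is seconds rather than the minutes a grammar change took.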

Interacting with it that way gets me to examples like this, where the italic text is generated by the model, and the non-italic text is added by me to prompt additional information. The model introduces the idea of the Fancy Dining restaurant that will make use of guest-provided foodstuffs; I responded in a way that prompted an example.

The model suggests that the tuna fish is displayed on a pedestal for the Duchess of Unthinking Insects; I think it was not unreasonable of me to wonder who exactly she might be.

Lower Silesse
Coffee and Salt — The Spokesman’s Ladder…

Fancy Dining: The food in this place is bad, but the proprietors at the restaurant Giant Hokkaido Pork appear delighted to make use of the amenities that we bring to their tables.

One day we asked them to prepare a freshly-caught tuna for consumption with chocolates. After that the tuna would not die of asphyxiation, so the chef decided to display it on a pedestal in honor of the Duchess of Unthinking Insects.\footnote{In local belief, the Duchess of Unthinking Insects is associated with celibacy and cold indifference to procreation. We suppose that the Duchess must be to blame for the delicacy of these tuna preparations, as they were made with methods that make the tuna unreasonably fresh.} With the pedestal of tuna displayed at the centre of the table, we instead dined on dried mouse meat and putrescent root.

GPT generator, with some very strange ideas about fine dining

Drilling into the more surprising output like this is fun, and produces text sequences that are very unlike the tuning data. But it also gets further from what the model was trained to do. If I want the system to be able to consistently produce a lot of commentary on specific-yet-Parrigues-appropriate dining experiences, I’d eventually want to re-train it.

So interacting with this model as a collaborator is all about pushing it just to the edge of its comfort zone. And asking just what it meant about those Unthinking Insects.
