Mid-August Link Assortment


August 16 is the next meeting of the Seattle Area IF Meetup.

August 23-24 is Reality Escape Con, an online, free convention about room escape games, organized by the same people who are behind the Room Escape Artist blog.

September 5 is the next meeting of the SF Bay Area IF Meetup.


October 3-4 is the announced weekend for Roguelike Celebration 2020. The event is moving to a virtual model this year. More specifics about the event can be found here.


IFComp 2020 is still accepting entries for another two weeks. Authors should register their intent to enter by September 1. The entries themselves are due September 28.

IntroComp, meanwhile, has already passed the deadline for submissions, but you can play the games and vote on them until August 31.

And inkjam 2020 has just recently concluded, with a number of games written in ink. The winners are still online to play.


Wide Island is a Twine piece by Draconic Chipmunk. It relies heavily on the technique of text expansion: a few linked words will expand into a much longer passage, which itself contains further expansions. Important information about the protagonist, setting, and situation is buried in different parts of the narrative, and the structure means that you might encounter it in any order. The effect reminded me of the telescopic narration in Lime Ergot, where new details constantly encourage the reader to recontextualise what has come before.

I had been reading for several minutes before discovering that the main character was a man with a wife and child. For lack of other information, I’d initially pictured someone demographically more like myself — but that shift felt like an intended part of the reading experience.

There were a few things that surprised me; for instance, several of the links seemed to offer more information on one topic but in fact opened out to talk about something different. But that relative lack of readerly agency also felt appropriate here. Overall, an interesting experiment in hypertext construction. (If this technique interests you, see also stretchtext.)

Thanks to Petter Sjölund, Dan Fabulich, 2lindell, Ed King, jackk225, Kevin Lo, nosferatu-if, OtherOlly, Sabe Jones, Sukil Etxenike, and thehatless, as well as long-term contributors Dannii Willis and Andrew Plotkin, Counterfeit Monkey is now available in Release 9(!). There is a change log, and the game can be played in-browser complete with the map and all the assorted goodies.

AI and Text Generation

GPT-3 has been available to a limited group of people for a couple of months now, and interesting applications are starting to appear, including a blog that fooled a number of human readers.

Nick Walton, the creator of the AI Dungeon project that used GPT-2, has now set up a GPT-3 version that requires a subscription to access. (Once you have access, you’ll need to go to the settings panel and switch over to the “Dragon” model to activate it, as well.)

On my trials so far, it’s given a reasonably coherent but often somewhat conservative performance.

The Dragon model also adds a feature that explicitly logs quests and tracks whether it thinks you’ve completed them or not — more of an attempt to track world state than we saw in the earlier versions of AI Dungeon.

You can also try priming the system with a prompt of your own; it took Counterfeit Monkey in a surprising direction with some torch-carrying sewer-dwellers.

Mailbag: IF for Reinforcement Learning

Hi Emily

I’m a PhD student working with Prof. Mark Riedl at Georgia Tech and Microsoft Research Redmond. I am currently working on making AI agents (specifically using reinforcement learning) that play interactive fiction games (text-adventure games in the vein of Zork) in a non-game-specific, generalizable way.

I was advised by Prof. Janet Murray that you would be the right person to help answer a question I had regarding these games, given your expertise in interactive fiction. If you have a list of such games (e.g. those given here https://github.com/microsoft/jericho#supported-games), is it possible to identify a subset of maybe ~10-15 of them that reasonably cover a majority of all interactive fiction games in terms of game structure, i.e. linearity of progression/score accumulation from the perspective of a learning agent? If it is possible, what would this set look like? Any insight at all would be great.

Nice to hear from you — I’ve been keeping an eye on this space as people have been publishing about it recently.

I’m not sure there’s a perfect answer to this, since IF is hugely varied in how it handles world model, score, pacing, etc. Also, your list here skews very much towards early interactive fiction, which means it doesn’t cover some of the formal experiments that came along later.

I also don’t remember how score works in all the games in this list — some of them I’ve not played, or played a long time ago.

However, with that in mind, here are a couple of categories that represent some fairly standard game structures:

Short or medium game in which score is given out rarely — Lost Pig (max 7)

Short or medium game in which score is given more frequently — Meteor etc. (max 30), Balances (max 51)

Long game in which score is distributed fairly frequently throughout — Adventure, Zork; possibly Enchanter and Sorcerer also; Anchorhead, as I recall

And from your list, I recall these being ones that might pose an interesting challenge:

Curses — it’s long, it’s complicated, it has a scoring system that it doles out gradually, and (if I’m remembering right) it also does a trick where at one point it actually deducts score from the player.

Wishbringer — this one’s interesting because there’s a scoring system that reacts to how many times you’ve used the magic stone in the game — so the more you use wishes, the easier the game becomes, but the lower your final score.

Hunter, in Darkness — doesn’t keep score. There’s also a procedurally generated maze in this, which I would expect to make it very challenging indeed.

Thinking about games not on your list, here are some other formal extremes that might be interesting to try to reason about; all of these can be found on https://ifdb.tads.org/ and in most cases they’re available for download.

ASCII and the Argonauts — an intentionally short and simple game that gives a bunch of +1 rewards for doing basic tasks; the relatively small verb set might make it easier than some of the other games.

Aisle — a game that takes one move to play, and for which many different verbs are available; there’s also no score. It’s hard to imagine how one would use reinforcement learning on this, but it represents one extreme that might be valuable for purposes of thought experiment.

Adventurer’s Consumer Guide — as I recall, this one gives out a pretty steady stream of +1 point rewards, rather than a few larger or rarer ones, so it might be a nice counterpoint to some of the others.

Savoir-Faire — a game of mine, and I suggest it just because I happen to know it well enough to know how the rewards work; there are frequent opportunities for scoring and some rewards are bigger than others.

Bronze — a game that I wrote that keeps track of how many rooms you’ve explored and triggers certain narrative events when you’ve found more of the space, so you could use the explored-rooms count as a secondary signal alongside the score and probably get some useful reinforcement out of that aspect as well.

Captain Verdeterre’s Plunder — gives you a score based on how much loot you managed to rescue off a sinking ship before it goes under. Genuinely an interesting optimization problem; human players have competed to come up with the highest-scoring possible traversal.

Journey to Alpha Centauri in Real Time — as the name suggests, this takes place over elapsing real time; because it represents a very long journey in space, it’s not possible to finish.

Rematch — a game in which the challenge is to figure out a single very long command that will solve the game in a single move, and in which there is a cyclical pattern to the initial world set-up. (I think this one is not a z-machine game, so it might not work with Jericho.)

Zero Sum Game — starts with a score and counts down to zero (but this may be less interesting than the others since you could just reverse the sign of the signal and wind up with something equally valid).

Hadean Lands — fiendishly hard puzzle game, in which instead of score you’re gaining access to lots of objects which could arguably be used as a proxy for progress. Also features areas where the player has to do similar things in slightly different ways.
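To make the reward-structure point concrete, here’s a toy sketch (in Python) of the Bronze-style idea of layering an exploration bonus on top of a sparse native score. The four-room environment below is entirely made up for illustration — a real setup would use something like Jericho’s FrotzEnv with an actual game file, and treat the game’s score delta as the reward:

```python
class ToyTextEnv:
    """A tiny stand-in for an IF environment: four rooms in a corridor,
    with a score bump only in the last room (a sparse native reward)."""
    def __init__(self):
        self.room = 0
        self.score = 0

    def step(self, action):
        # Actions are 'east' or 'west'; the agent moves along the corridor.
        if action == 'east':
            self.room = min(self.room + 1, 3)
        elif action == 'west':
            self.room = max(self.room - 1, 0)
        reward = 0
        if self.room == 3 and self.score == 0:
            self.score = 10          # the game's own sparse score signal
            reward = 10
        return self.room, reward

def shaped_episode(env, actions, bonus=1):
    """Replay a fixed action sequence, adding an exploration bonus the
    first time each room is seen (the explored-rooms secondary signal)."""
    seen = {env.room}
    total = 0
    for action in actions:
        room, reward = env.step(action)
        if room not in seen:
            seen.add(room)
            reward += bonus          # shaping: reward novelty, not just score
        total += reward
    return total

# Walking straight east earns the 10-point score plus 3 novelty bonuses.
print(shaped_episode(ToyTextEnv(), ['east', 'east', 'east']))  # prints 13
```

With the bonus set to zero, an agent in a game like Hunter, in Darkness (no score at all) would receive no learning signal whatsoever, which is exactly why a secondary signal like explored-room count can matter.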

Can AI tell a good story?


Tuesday I was invited to speak at the interactive narratives summit at the London Games Festival, specifically in a debate over whether AI can create a good story.

Perhaps the original plan was to stage a good showdown, but I have somewhat complicated views about what the question even means, and so, as it happens, did my would-be debate opponent Brenden Gibbons. So instead we had a more temperate but, I think, more interesting conversation, moderated by David Tomchak.

This is not a transcript of that conversation (I couldn’t reproduce one), but an attempt to recapture some key points, drawing also on notes I made before the event and expanding some of the ideas with links or examples I didn’t have available in the room.

First, AI can definitely already create stories, by pretty much any definition that a narratologist would establish. Indeed, we can set the bar higher than just “is there a sequence of causally-linked events,” though many scholars would accept that as enough. Some of GPT-2’s output is interesting, funny, and narrative. So are the outputs of other techniques stretching back to the 70s, from generative grammars to the model-and-curate approach used by James Ryan in his recent dissertation Curating Simulated Storyworlds. If AI were an orchard, we would have already plucked many and diverse story fruits there.

Continue reading “Can AI tell a good story?”

Mailbag: AI Research on Dialogue and Story Generation (Part 3)

This is a continuation of an earlier mailbag answer about AI research that touches on dialogue and story generation. As before, I’m picking a few points of interest, summarizing highlights, and then linking through to the detailed research.

This one is about a couple of areas of natural language processing and generation, as well as sentiment understanding, relevant to how we might realize stories and dialogue with particular surface features and characteristics.

Transferring text style


Style transfer is familiar in image manipulation, and there are loads of consumer-facing applications and websites that let you make style changes to your own photographs. Textual style transfer is a more challenging problem. How might you express the same information, but in different wording, representing a different authorial manner? Alter the sentiment of the text to make it more positive or negative? Translate complex language to something more basic, or vice versa? Capture the distinctive prose characteristics of a well-known author or a specific era? Indeed, looked at the right way, translation from one human language into another can be regarded as a form of style transfer.

Continue reading “Mailbag: AI Research on Dialogue and Story Generation (Part 3)”

Mailbag: Research on Dialogue and Story Generation (Part 2)

This is a continuation of an earlier mailbag answer about research that touches on dialogue and story generation. As before, I’m picking a few points of interest, summarizing highlights, and then linking through to the detailed research. In this section, I’m mostly looking at authoring tools and at academic theoretical work on interactive narrative.

This will not be comprehensive.

Authoring Tools for Dynamic or Procedural Storytelling

Several academic projects focus on building authoring tools for various types of dynamic or procedural storytelling, whether or not those are heavily augmented by AI. Many of these don’t rely on machine learning per se but do explore some other aspect of the problem; in particular, several attempt to furnish the author with the means to build content for a planner-based storytelling system. But there’s a whole range of functionality here (and this is not a complete list):


Andrew Gordon has done quite a bit of work around tools designed to assist authors with story creation ideas based on large corpora. I’ve written elsewhere about DINE, his interactive story authoring tool. DINE allows authors to describe the sorts of prompts that they want to understand, but uses its own models of language to determine whether a player’s input qualifies as matching a prompt. The result is less controllable but sometimes more robust than a standard interactive fiction parser. (“Sometimes” is the key word in that sentence.)


Emma’s Journey is a project out of UCSC that combines fragments of choice-based narrative with a planner to create dynamic scenes. Individual pieces feel like they could have been done in Twine, but the selection and ordering of pieces is very dependent on current stats; and there is a distracting minigame for the player that also affects what options are available. This is built with the experimental StoryAssembler tool. There are also several associated research papers.

Continue reading “Mailbag: Research on Dialogue and Story Generation (Part 2)”

Mailbag: AI Research on Dialogue and Story Generation

I’m curious: do you follow much research that happens in stories and dialog these days? In the world of machine learning research, there’s much less in dialog and stories than other areas (e.g. image generation/recognition or translation), but once in a while, you come across some interesting work, e.g. Hierarchical Neural Story Generation (by some folks in Facebook AI).

For some years now I’ve followed work coming out of the UCSC Expressive Intelligence Studio; work done at Georgia Tech around crowdsourced narrative generation; game industry applications introduced or covered at the GDC AI Summit (though it is rarer to see extensive story-generation work here). I’ve also served on the program committees for ICCC and ICIDS and a few FDG workshops; and am an associate editor on IEEE Transactions on Games focused on interactive storytelling applications. Here (1, 2, 3) is my multi-part post covering the book Interactive Digital Narrative in detail.

That’s not to say I see (or could see) everything that’s happening. I tend to focus on things that look most ready to be used in games, entertainment, or chatbot applications — especially those that are designed to support a partially human-authored experience. I also divide my available “research” time between academic work and hands-on experiments in areas that interest me.

So with that perspective in mind:

  • I’m not attempting a comprehensive literature review here! That would be huge. This coverage cherrypicks items
  • I will go pretty lightly on the technical detail since the typical readership of this blog may not be that interested, but I’ll try to provide summary and example information that explains why a given item is interesting in my opinion, and then link back to the original research for people who want the deeper dive
  • I’ll actually start by briefly summarizing the paper the questioner linked
  • Even with cherrypicking, there is a lot to say here and I am breaking it out over multiple posts

That Initial Paper

For other readers: the linked article in this question is about using a large dataset pulled from Reddit’s WritingPrompts board and a machine learning model that draws on multiple techniques (convolutional seq2seq, gated self-attention). After training, the system is able to take short prompts and create a paragraph or so of story that relates to the prompt. Several of the sample output sections are quite cool.


But they are generating surface text rather than plot, and the evidence suggests that they would not be able to produce a coherent long-term plot. Just within this dialogue section, we’re talking about a tablet-virus-monster object, and we’ve got a couple of random scientist characters.

Continue reading “Mailbag: AI Research on Dialogue and Story Generation”