This past weekend was NarraScope, a genuinely excellent conference about interactive storytelling in many shapes and forms. It was fantastic, and my one complaint is that there was so much good content that I was forced to miss a lot of things I would have liked to see.
(I livetweeted as much as I could, and I’m grateful to other attendees who did the same from other talks. The #NarraScope stream on Twitter contains a lot of notes about all the things discussed there.)
Graham spoke about Inform 7’s current state of progress, and for those who either weren’t able to attend NarraScope at all, or who chose to do one of the other excellent things going on at the same time, we’ve posted the slides and notes from that talk.
And if you’re curious about the previous time he spoke about I7: https://emshort.blog/2018/06/10/notes-on-the-direction-of-inform/
Inform 7 is used in a number of contexts that may be slightly surprising to its text adventure fans: in education, in prototyping game systems for commercial games, and lately even for machine learning research.
TextWorld: A Learning Environment for Text-Based Games documents how the researchers from Tilburg University, McGill University, and Microsoft Research built text adventure worlds with Inform 7 as part of an experiment in reinforcement learning.
Reinforcement learning is a machine learning strategy in which the ML agent gives inputs to a system (which might be a game that you’re training it to play well) and receives back a score on whether the input caused good or bad results. This score is the “reinforcement” part of the loop. Based on the cumulative scoring, the system readjusts its approach. Over many attempts to play the same game, the agent is trained to play better and better: it develops a policy, a mapping between current state and the action it should perform next.
With reinforcement learning, because you’re relying on the game (or other system) to provide the training feedback dynamically, you don’t need to start your machine learning process with a big stack of pre-labeled data, and you don’t need a human being to understand the system before beginning to train. Reinforcement learning has been used to good effect in training computer agents to play Atari 2600 games.
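To make the loop above concrete, here is a toy sketch of tabular Q-learning on a tiny invented two-room “game”. Everything in it — the rooms, the actions, the reward for taking the lamp — is made up for illustration; it is not TextWorld’s actual machinery, just the shape of the scoring-and-readjusting cycle described above.

```python
# A toy reinforcement-learning loop: the agent tries actions, receives a
# reward signal back from the world, and adjusts its value estimates.
import random

# Invented world model: (state, action) -> (next state, reward).
# The only reward comes from taking the lamp in the cellar.
TRANSITIONS = {
    ("hall", "go down"): ("cellar", 0),
    ("cellar", "take lamp"): ("done", 1),
}
ACTIONS = ["go down", "take lamp", "wait"]

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.2):
    """Learn a policy: a mapping from current state to the action to take."""
    random.seed(0)
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state = "hall"
        while state != "done":
            # Mostly exploit the current policy, sometimes explore.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = TRANSITIONS.get((state, action), (state, 0))
            # The "reinforcement" step: nudge the estimate toward the
            # observed reward plus the discounted future value.
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

q = train()
policy = {s: max(ACTIONS, key=lambda a: q.get((s, a), 0.0))
          for s in ("hall", "cellar")}
print(policy)  # the trained agent heads straight for the lamp
```

Over repeated episodes the score feedback shapes the value table until the greedy policy goes down and takes the lamp — the “plays better and better” behavior, on a problem small enough to watch.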
Using this method with text adventures is dramatically more challenging, though, for a number of reasons:
- there are many more types of valid input than in the typical arcade game (the “action space”), and those actions are described in language (though the authors note the value of work such as that of the BYU researchers Fulda et al. in figuring out which verbs could sensibly be applied to a given noun)
- world state is communicated back in language (the “observational space”), and may be incompletely conveyed to the player, with lots of hidden state
- goals often need to be inferred by the player (“oh, I guess I’m trying to get that useful object from Aunt Jemima”)
- many Atari 2600 games have frequent changes of score or frequent death, providing a constant signal of feedback, whereas not all progress in a text adventure is rewarded by a score change, and solving a puzzle may require many moves that are not individually scored
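The first point above is easy to feel in miniature. Here is a toy sketch — with invented verb and noun lists and a made-up compatibility table, loosely in the spirit of the Fulda et al. affordance work mentioned above — of how a language action space multiplies and how filtering prunes it:

```python
# Toy illustration of the language "action space": even tiny word lists
# yield many candidate commands, most of which a parser would reject.
# The word lists and the compatibility table below are invented.
from itertools import product

verbs = ["take", "open", "unlock", "eat", "read"]
nouns = ["door", "key", "lamp", "sandwich", "note"]

candidates = [f"{v} {n}" for v, n in product(verbs, nouns)]
print(len(candidates))  # 5 verbs x 5 nouns = 25 candidate commands

# A crude affordance filter: keep only the verb-noun pairs that a
# (hypothetical) compatibility table says could sensibly go together.
SENSIBLE = {("take", "key"), ("take", "lamp"), ("open", "door"),
            ("unlock", "door"), ("eat", "sandwich"), ("read", "note")}
filtered = [c for c in candidates if tuple(c.split()) in SENSIBLE]
print(filtered)
```

With a realistic verb list and every noun mentioned in the room description, the unfiltered space runs to thousands of commands per turn, which is why pruning work of this kind matters to the researchers.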
TextWorld’s authors feel we’re not yet ready to train a machine agent to solve a hand-authored IF game like Zork — and they’ve documented the challenges much more extensively than my rewording above. What they have done instead is to build a sandbox environment that handles a more predictable subset of text adventure behavior: TextWorld is able to automatically generate games containing many of the standard puzzles.
Yesterday Graham Nelson spoke to the Oxford/London IF Meetup about what he has been doing with Inform recently, what has and has not been successful (in his view), and where the project is going next. The slides and text are now available here.
Because the talk is intended for a mixed audience, it does explain some things that readers of this blog may already know well, but the later part of the talk introduces some new elements.
Vorple’s functionality goes beyond the (still quite cool) work that furkle did to support the front end of Worldsmith. A version of Vorple has been around for some time, and the prototypes for it existed as far back as the IF Demo Fair, but what’s been available so far has been compatible only with the Z-machine, a format so small that it’s increasingly hard to generate any viable Z-machine games with Inform 7 at all. Meanwhile, Hugo Labrande has maintained a Vorple version suitable for use with Inform 6.
There are some extra details available at the announcement post here.
The new edition of Vorple opens the following possibilities for games that are being played online or in a browser (which, these days, is more and more of them):
- Authorial control over fonts and typography on a level that has generally been difficult or impossible
- Hypertext games programmed and driven through Inform, something that was previously possible but tended to come out looking rather clunky
- Parser IF that makes attractive, dynamic use of illustrations, maps, and even videos
- Games that remove text after it has already been printed to the screen (something that was just about impossible with former non-Vorple Inform interpreters); this means that one can, among other things, remove error messages from the scrollback, or change the game’s printed history to reflect changes in the protagonist’s mentality
- Tooltips and modal dialogue boxes to do things like offer definitions or confirm player choices outside the main narrative
- Help menus other than the horrid nested, keypress-driven things we’ve been suffering with since 1994
- Probably many other things I have not thought of yet.
I’ve had the chance to play with the extension set as Juhana tested it for release, and it is really cool.
In addition, those in range of London are welcome to join us for the IF Tools meetup on May 31, where Juhana will Skype in to talk to us about the Vorple project, so those interested can get a first-hand introduction.
The following was posted as a comment on another post to which it was not related; I’ve removed it from there to a space of its own. It concerns the status of Inform 7.
I’ve set aside a couple of days this week to work on a problem that I understand to be a concern for a number of authors, namely that there are various very useful Inform extensions that have never been updated to be compatible with 6L38.
I’ve already made updates to the following, so that they have been, or soon will be, submitted to the Inform Public Library (the repository that holds 6L38-compatible extensions and allows them to be installed and managed directly within the IDE, though you can also browse it less attractively online):
- Simple Graphical Window
- Consolidated Multiple Actions (being beta-tested before submission, as it’s complex)
- Hypothetical Questions (also being beta-tested)
I’ve also done some minor tweaks to Postures and Approaches, both of which had been updated but were presenting difficulties for some users.
It’s hardly practical to try to address all extensions that aren’t yet converted, but if people have particular requests that are serious stumbling blocks for them, then please let me know and I’ll try to fold these into my schedule. I’d like to direct my attention especially towards things that either a) someone is using and the need for that extension is preventing them from updating to the current Inform version or b) someone would really like to use in a current 6L38 project, but can’t. (As opposed to “I was browsing the extensions website and this old extension looked kind of interesting in theory even though I have no actual plans to run it at the moment and so far as I know neither does anyone else.”)
If it helps, the older extensions site is here. You may also like to look at a description of how to do basic updates for 6L02 (which usually also works for the current version, 6L38), or a forum thread explaining how extension filing works.
I should also add that we have volunteers working on a system for a more systematic storage of multiple extension versions, etc.