This is a follow-on answer to a previous mailbag post, specifically the part in which the questioner asks,
Would you have any thoughts on how to… improve the adaptive skills needed for bringing IF to newer formats and into audio?
I take this to mean not “how do I port an existing work to an interactive format” (which is also an interesting question), but “how do I do IF-like interactivity in formats other than text, especially audio?”
Key challenges for this, in my experience, center on these areas:
Output. Adapting how you communicate events (what just happened), state (what is true about the world or story currently, e.g. inventory or quest goals), and affordances (what can the player do now).
Different media offer different combinations of sensory channels. For example, an audio piece lets you communicate events as they happen, but may not offer a secondary channel to communicate state at all — or it may let you rely on background noise or procedural music to remind the player of a little ambient information. A console game, by contrast, might use a visual filter to change how the scene looks when the protagonist is low on health, which means that medium can keep the health problem constantly in view.
These differences can have a lot of knock-on effects on player experience. If the player will only be able to understand and manipulate the narrative in a satisfying way with access to a quest journal and a map, then interactive audio is the wrong medium for that story, and you may find you need to make pretty substantial alterations to the concept.
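One way to think this through before committing to a medium is to list what the story needs to communicate (events, state, affordances) against the channels the medium actually offers. Here's a minimal, purely illustrative sketch of that exercise in Python; all the names and channel labels are my own invention, not from any real toolkit:

```python
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    channels: set  # e.g. {"narration", "ambient_audio", "visual_overlay"}

@dataclass
class StoryNeeds:
    events: str       # channel needed to report what just happened
    state: str        # channel needed to keep world state in view
    affordances: str  # channel needed to signal what the player can do

def unmet_needs(medium: Medium, needs: StoryNeeds) -> list:
    """Return the information types this medium cannot carry."""
    gaps = []
    for kind, channel in [("events", needs.events),
                          ("state", needs.state),
                          ("affordances", needs.affordances)]:
        if channel not in medium.channels:
            gaps.append(kind)
    return gaps

audio = Medium("interactive audio", {"narration", "ambient_audio"})
needs = StoryNeeds(events="narration", state="visual_overlay",
                   affordances="narration")
print(unmet_needs(audio, needs))  # → ['state']
```

If the gap list comes back non-empty — here, the story's state display has no home in audio — that's the cue to redesign the concept or pick another medium.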
Input. How is the player communicating their choices, and what range of options does that realistically give you? Not All Choice Interfaces Are Alike is an article of mine that looks at some of these distinctions. The “Pressure” category is especially important in cross-medium adaptations: interactive film doesn’t always hold its pacing well if the player doesn’t make choices relatively quickly, which suggests using simple, consistent interfaces on a timer rather than complex choices that the player might need to re-learn each time. (But that’s just one possible solution.)
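The timer-with-default pattern mentioned above can be sketched very simply: poll for the player's pick, and if the deadline passes, fall through to a default so the pacing holds. This is an illustrative sketch with the input source abstracted as a callable, not a real engine API:

```python
import time

def resolve_choice(options, get_input, timeout=5.0, default_index=0):
    """Poll get_input() until it returns a valid option index or time runs out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        pick = get_input()  # returns an int index, or None if no answer yet
        if pick is not None and 0 <= pick < len(options):
            return options[pick]
        time.sleep(0.01)
    return options[default_index]  # timer expired: keep the scene moving

# A player who answers immediately:
print(resolve_choice(["stay", "run"], get_input=lambda: 1, timeout=0.5))   # → run
# A player who never answers:
print(resolve_choice(["stay", "run"], get_input=lambda: None, timeout=0.1))  # → stay
```

The design point is that the default is authored, not an error state: the story always has a sensible continuation even when the player freezes.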
And, of course, study what, if any, work has already been done in the format you’re considering. Even for (relatively) newly created spaces such as VR and augmented reality, there’s some work out there that you can investigate. What worked? What’s been clumsy or unsuccessful?
For audio in particular, the odds are fair that you’re building for a platform like Alexa that uses spoken input, so there it’s useful to look at how other Alexa skill creators (even outside the narrative space) deal with cluing the user about options; responding constructively when the spoken input was not understood; and allowing for always-on default commands like “move on” or “next” to have some persistent functionality. Rosina Sound has done a fair amount of work in this area, and this talk by Nicky Birch covers some of the challenges.
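The pattern those skill creators converge on — scene-specific phrases layered over always-on defaults, with a constructive reprompt for anything unrecognized — can be sketched like this. To be clear, this mimics the shape of the pattern only; it is not the real Alexa Skills Kit API, and all the phrases are illustrative:

```python
# Always-on commands work in every scene, regardless of context.
ALWAYS_ON = {"next": "advance", "move on": "advance", "repeat": "replay"}

def handle_utterance(utterance, scene_commands):
    """Map a spoken phrase to an action, preferring scene-specific commands."""
    phrase = utterance.strip().lower()
    if phrase in scene_commands:
        return scene_commands[phrase]
    if phrase in ALWAYS_ON:
        return ALWAYS_ON[phrase]
    # Unrecognized: remind the player of their options instead of failing silently.
    options = ", ".join(sorted(scene_commands))
    return f"reprompt: you can say {options}, or 'next' to move on"

scene = {"open the door": "door_scene", "listen": "eavesdrop"}
print(handle_utterance("Listen", scene))  # → eavesdrop
print(handle_utterance("next", scene))    # → advance
print(handle_utterance("dance", scene))   # reprompt listing the options
```

The reprompt branch is the part worth copying: it turns a failed recognition into a cue about affordances rather than a dead end.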
State tracking. Some platforms really constrain how much world state you can preserve, which means that some IF methods aren't feasible. Meanwhile, the people brought in to write for interactive pieces in other media often come from film, TV, radio, and so on. I've had the pleasure of working with some terrific people who came out of conventional media on projects like this, but there's often a bit of a learning curve in helping them understand what interactivity might look like when it's not just basically a CYOA book. In this context, it's reasonable to expect that you'll need to keep things simple: use variables sparingly.
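"Use variables sparingly" in practice might mean something like this: a handful of flags and one counter, with branching that checks them in priority order, rather than a full world model. A deliberately minimal sketch, with all names invented for illustration:

```python
# The entire persistent state: two flags and a counter.
state = {"met_informant": False, "has_key": False, "trust": 0}

def choose_scene(state):
    """Branch on a small number of variables, checked in priority order."""
    if state["has_key"] and state["trust"] >= 2:
        return "vault_ending"
    if state["met_informant"]:
        return "informant_followup"
    return "street_intro"

print(choose_scene(state))  # → street_intro
state.update(met_informant=True, trust=1)
print(choose_scene(state))  # → informant_followup
```

A structure this small is easy to explain to collaborators from linear media, easy to fit into a constrained platform's storage, and still enough to make the player's history visibly matter.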
Resource limits. Costs per branch of content go up if everything you add to the story has to be acted by live actors on a stage set for recording. This is the same set of considerations you’d get with AAA games, but the exact profile of the problem varies depending on the specific medium.
To create the desired amount of variation in the experience with limited resources, look at how you can break the experience out into multiple channels. Maybe you use the same film but swap out the soundtrack to give it different significance in different situations. Maybe you have two audio tracks, one containing dialogue and the other containing the increasingly urgent ticking of the bomb, so that the conversation can happen in different orders while the pacing cue is preserved.
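The film-plus-swappable-soundtrack idea is, at bottom, just a function from story state to a score choice, with the expensive asset reused unchanged. A hedged sketch, with every clip and track name invented for illustration:

```python
def soundtrack_for(clip, suspicion):
    """Pair one reusable film clip with a score chosen by story state."""
    score = "ominous_strings" if suspicion > 3 else "light_piano"
    return (clip, score)

# The same footage reads as cozy or as sinister depending on what the
# player has learned so far:
print(soundtrack_for("dinner_scene.mp4", suspicion=1))  # → ('dinner_scene.mp4', 'light_piano')
print(soundtrack_for("dinner_scene.mp4", suspicion=5))  # → ('dinner_scene.mp4', 'ominous_strings')
```

One filming session, two (or more) emotional readings: the variation cost moves from the expensive channel to the cheap one.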
Also, be aware of how resource limitations are going to affect the process of creation. If you're used to text games, you may be accustomed to iterating a lot on the flow of text to get it right. Once you've spent a lot of money filming content, iteration becomes far more painful. Here you can do a certain amount with mockups, building Twine versions of the story to play through before you record anything, and so on.
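Even a throwaway script can stand in for a Twine mockup at this stage: represent the branch structure as data, then walk it with scripted choices to check flow and reachability before anything is recorded. A minimal illustrative sketch (passage names are invented):

```python
# Branch structure as plain data: text plus a map of choices to passages.
passages = {
    "start":  {"text": "A knock at the door.",
               "choices": {"answer": "hall", "hide": "closet"}},
    "hall":   {"text": "Your sister, soaked through.", "choices": {}},
    "closet": {"text": "The knocking stops.", "choices": {}},
}

def playthrough(script):
    """Follow a list of choice keys from 'start'; return the passages visited."""
    here, visited = "start", ["start"]
    for choice in script:
        here = passages[here]["choices"][choice]
        visited.append(here)
    return visited

print(playthrough(["answer"]))  # → ['start', 'hall']
print(playthrough(["hide"]))   # → ['start', 'closet']
```

Playing through every scripted path like this is cheap; reshooting a scene because the branch structure turned out to be wrong is not.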
At the same time, pay attention to how veterans of your other medium traditionally approach this problem. The idea of doing a table read for an interactive fiction might seem foreign, but it might be the right idea for an interactive piece that’s going to be acted.
Here are this blog’s archives on interactive film, audio, and theatre, though obviously I haven’t covered nearly everything in this space. I also recommend Brian Moriarty’s I Sing the Story Electric for coverage of some interactive multimedia projects that predate computer IF.