Mailbag: Adapting IF Skills to Adjacent Media

This is a follow-on answer to a previous mailbag post, specifically the part in which the questioner asks,

Would you have any thoughts on how to… improve the adaptive skills needed for bringing IF to newer formats and into audio?

I take this to mean not “how do I port an existing work to an interactive format” (which is also an interesting question), but “how do I do IF-like interactivity in formats other than text, especially audio?”

Key challenges for this, in my experience, center on these areas:

Output. Adapting how you communicate events (what just happened), state (what is true about the world or story currently, e.g. inventory or quest goals), and affordances (what can the player do now).

Different media offer different groupings of senses. For example, an audio piece lets you communicate events as they happen, but may not offer a secondary channel for communicating state at all; at best, you might rely on background noise or procedural music to remind the player of a little ambient information. A console game, by contrast, might apply a visual filter to change how the scene looks when the protagonist is low on health, which means that medium can keep the health situation constantly in view.

These differences can have a lot of knock-on effects on player experience. If the player is only going to be able to understand and manipulate the narrative in a satisfactory way if they’ve got access to a quest journal and a map, then interactive audio is the wrong medium for that story, and you may find you need to make pretty substantial alterations to the concept.

Input. How is the player communicating their choices, and what range of options does that realistically give you? Not All Choice Interfaces Are Alike is an article of mine that looks at some of these distinctions. The “Pressure” category is especially important in cross-medium adaptations: interactive film doesn’t always hold its pacing well if the player doesn’t make choices relatively quickly, which suggests using simple, consistent interfaces on a timer rather than complex choices that the player might need to re-learn each time. (But that’s just one possible solution.)

And, of course, study what, if any, work has already been done in the format you’re considering. Even for (relatively) newly created spaces such as VR and augmented reality, there’s some work out there that you can investigate. What worked? What’s been clumsy or unsuccessful?

For audio in particular, the odds are fair that you’re building for a platform like Alexa that uses spoken input, so it’s useful to look at how other Alexa skill creators (even outside the narrative space) deal with cluing the user about options; responding constructively when the spoken input was not understood; and allowing always-on default commands like “move on” or “next” to have some persistent functionality. Rosina Sound has done a fair amount of work in this area, and this talk by Nicky Birch covers some of the challenges.
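To make those three habits concrete, here is a minimal sketch of a fallback-friendly voice input handler. The names (`Option`, `handle_utterance`, the command set) are invented for illustration and don’t come from any real SDK; an actual Alexa skill would route this through the Alexa Skills Kit’s intent model, but the shape of the logic is the same.

```python
# Hypothetical sketch: always-on defaults, scene options, and a
# constructive fallback that re-cues choices instead of failing.

DEFAULT_COMMANDS = {"next", "move on", "repeat", "help"}

class Option:
    def __init__(self, prompt, response):
        self.prompt = prompt        # phrase the player can say
        self.response = response    # what the story answers with

    def matches(self, text):
        return self.prompt in text

def handle_utterance(utterance, options):
    text = utterance.strip().lower()
    # Always-on defaults are recognized in every scene,
    # so the player is never stranded.
    if text in DEFAULT_COMMANDS:
        return f"[default command: {text}]"
    # Then try the scene-specific options.
    for option in options:
        if option.matches(text):
            return option.response
    # Not understood: re-cue the available choices
    # rather than returning a bare error.
    return "You could say: " + ", ".join(o.prompt for o in options)

options = [Option("open the door", "The door creaks open."),
           Option("wait", "You wait in the dark.")]
print(handle_utterance("Please open the door", options))
print(handle_utterance("mumble something", options))
```

The key design point is the last branch: when recognition fails, the system uses the failure as an opportunity to restate the affordances.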

State tracking. Some platforms really constrain how much world state you can preserve, which means that some IF methods aren’t feasible. Separately, the people brought in to write for interactive pieces in other media often come from film, TV, radio, etc. I’ve had the pleasure of working with some terrific people who came out of conventional media on projects like this, but there’s often a bit of a learning curve in helping them understand what interactivity might look like when it’s not just basically a CYOA book. It’s reasonable to expect that you may need to keep things simple in this context: use variables sparingly.
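As a sketch of what “keep things simple” can look like on a constrained platform: if all you can persist between sessions is a small token (a short string or one integer), a handful of boolean flags can be packed into it. The flag names below are invented examples, not from any particular project.

```python
# Hedged sketch: packing a few story flags into a single integer,
# for platforms that only let you persist a tiny amount of state.

FLAGS = ["met_guide", "found_key", "heard_rumor", "bomb_armed"]

def pack(state):
    """Encode a set of flag names as one integer (one bit per flag)."""
    value = 0
    for i, name in enumerate(FLAGS):
        if name in state:
            value |= 1 << i
    return value

def unpack(value):
    """Decode the integer back into the set of active flags."""
    return {name for i, name in enumerate(FLAGS) if value & (1 << i)}

token = pack({"found_key", "heard_rumor"})
print(token)          # a single small number, easy to persist
print(unpack(token))
```

Working within a budget of a few dozen flags like this also tends to force the kind of disciplined, legible state design that helps collaborators from linear media follow the structure.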

Resource limits. Costs per branch of content go up if everything you add to the story has to be acted by live actors on a stage set for recording. This is the same set of considerations you’d get with AAA games, but the exact profile of the problem varies depending on the specific medium.

To create the desired amount of variation in the experience with limited resources, look at how you can break the experience out into multiple channels. Maybe you use the same film, but swap out the soundtrack to give it different significance in different situations. Maybe you have two audio tracks, one containing dialogue and the other containing the increasingly urgent ticking of the bomb, so that the conversation can happen in different orders while the ticking still carries the pacing cue.

Also, be aware of how resource limitations are going to affect the process of creation. If you’re used to text games, you may be accustomed to iterating a lot on the flow of text to get it right. Once you’ve spent a lot of money filming content, iteration becomes far more painful. Here you can do a certain amount with mockups, building Twine versions of the story to play through before you record anything, etc.

At the same time, pay attention to how veterans of your other medium traditionally approach this problem. The idea of doing a table read for an interactive fiction might seem foreign, but it might be the right idea for an interactive piece that’s going to be acted.

Here are this blog’s archives on interactive film, audio, and theatre, though obviously I haven’t covered nearly everything in this space. I also recommend Brian Moriarty’s I Sing the Story Electric for coverage of some interactive multimedia projects that predate computer IF.

6 thoughts on “Mailbag: Adapting IF Skills to Adjacent Media”

  1. Thank you! We’ve done several location-based interactive audio pieces by now (Four! Does four qualify as several? Let’s say yes), and many of the challenges you mention ring a bell (a giant one at that). I hadn’t thought about the breaking out into multiple channels part yet – we actually do that by keeping voice and music/atmosphere separate, but it never occurred to me to use this to alter the “feel” of a given scene by changing the background without changing the actual content. That seems promising.

    Our works so far have been commissioned by institutions not only outside IF or gaming, but also outside media altogether, a situation that carries its very own boons and challenges. It was great to work with resident archeologists to come up with a story for an open-air museum that is basically a life-size reconstruction of a Celto-Roman villa; it was really draining to collaborate with an environmentalist organisation to create a story for a stretch of renatured forest – they were so desperately bent on educational content to please their sponsors that they made us cram so many facts into so little story that the characters ended up talking mostly like Wikipedia entries.

    Also, we found that the concept – and creative potential – of state tracking is immensely tough to convey to non-technical people who have never played a video game. We consider this our main selling point: stories that react to what the listener does. Without it, our app would be just like any GPS-based location guide with some fancy music and voice acting on top. Unfortunately, that selling point sails straight past most of the potential buyers we pitch it to. But luckily for us, the fancy music and voice acting do a lot of the heavy lifting; that’s what gets them on board when we give them listening samples from previous works, and then we can sneak in our darling state tracking through the back door. Once they experience it up close in the finished work, they tend to like it too.

    To give some details about how we go about that stuff: I write the stories, script their logic and test them in Twine, using a customized SugarCube story format. Each story fragment (which means basically each if-clause) is connected with an audio file, and each passage in Twine corresponds to a physical location marked by GPS coordinates. Player choice is expressed exclusively by walking, headphones on, smartphone in pocket. It’s basically a walking simulator, only with hyper-realistic walking.
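    A rough sketch of the location-to-passage logic described above (coordinates and passage names invented; a real app would use proper geodesic distance and trigger radii rather than this flat-plane nearest-neighbour approximation):

```python
# Hypothetical sketch: player's GPS position selects the active passage.

import math

PASSAGES = {
    "villa_gate": (53.5500, 9.9900),
    "courtyard":  (53.5504, 9.9905),
    "bath_house": (53.5510, 9.9912),
}

def nearest_passage(lat, lon):
    """Return the name of the passage whose anchor point is closest."""
    def dist(coord):
        return math.hypot(coord[0] - lat, coord[1] - lon)
    return min(PASSAGES, key=lambda name: dist(PASSAGES[name]))

print(nearest_passage(53.5505, 9.9906))
```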

    So, as an author, you can easily go all Dear Esther on a place, which actually can be very effective, but also more complex uses of state tracking with Big Decisions, subtle callbacks, gating and stuff are possible. It’s a medium with a lot of potential, and a lot of pitfalls.

    So, to conclude (I tend to lose friends and family by getting too excited and talking too much about this): if anybody out there does something even remotely similar, I’d love to get in touch for some exchange of ideas.

    1. I’ve actually been looking at putting together a London IF Meetup that covers interactive audio later in the year, because we have several people who I think could speak to that in interesting ways — and I bet your experiences would be really interesting to share there as well. I’m afraid I don’t know where you’re located, but if you’re sometimes in London, our meetup would be one possible place to discuss all this!

      1. Oh, ok! That sounds very appealing indeed. I’m located in Hamburg, and London unfortunately isn’t part of my daily commute – but I’ve been looking for a reason to go visit. And talking to people who might actually like listening to me going on about this stuff sure seems like a good enough reason to me. So I’ll keep my eyes peeled for the date of that meetup, and I’ll definitely try to show up!
