Human Errors describes a world in which human attention (and empathy, care, and understanding) is a severely limited resource. The player plays the role of a contractor brought in to triage support tickets on a product that (we quickly realize) has a rather alarming range of functionality. What you’re supposed to do is close as many tickets as possible, while prioritizing only the undeniably critical ones.
You also have the option — not preferred by the company — to follow up with particular users and try to get more of their stories. Here, you can engage either as a nameless QA figure or via personal email.
But engage too much, with too many people, and the company will start to view you as inefficient, or as going outside the proper parameters for engagement, and your access to the system will be cut off entirely. So you’ll have to budget your sympathy, dole it out cautiously, try not to get in trouble too quickly.
A single interaction node looks like this: [screenshot]
There are, of course, other games out there working through the interface of simulated email or other computer-mediated messaging — I recently covered Grayscale, which puts you in the role of an HR employee resolving complaints, for instance. Several of Christine Love’s pieces act similarly.
But what I particularly like about the system in Human Errors is the way it combines the guided and the open-ended, the effective and the reflective choices. If you choose to close an issue, it goes away, is no longer your problem. If you write an email to a user, you get a brief window — enough to type a short sentence or two — in which to write whatever you want before the text box fades to “Sent.” It’s not expressive in the sense of a parser input, because you’re not constructing a complex command all of whose aspects will be understood by the game; but it does allow and indeed encourage the player to express something.
To expand a little on how the choice mechanics work here, in light of the categories I define in that old article:
- Effort: varied: you can choose whether to make something go away in a click or two, or you can type an answer and invite more demands to come in
- Expressiveness: the illusion of expressiveness only, but that feels important, in context
- Ambiguity: you’ll learn the system in a bit, and it does act consistently, though it takes some exploration at first
- Discoverability: high; there are no meaningful hidden moves. (Or, if there are, they’re so hidden that I didn’t find them in five or so playthroughs.)
- Pressure: low on any individual moment of choice, but you can run out of options overall
- Embodiment: high; this is very close to the actual interface you would use to perform this job
One of the infamous things about choice-based interfaces is that they permit navigation on autopilot, that you can get into a phase of just clicking your next option without even fully reading all the text that comes before it. (At an IF Meetup on UI design, Joseph Humfrey talked about how 80 Days deliberately fades in the choices just a bit later than the main text, to prevent that rhythm of leaping to give an answer before we’ve even read the question.) Human Errors is about navigating on autopilot, or choosing not to, and it gives us a choice interface that incorporates both of those possibilities.
This theme is present in the content also. Many of the stories of individual users highlight horrors in the application of technology, machine learning, or the mechanical-turk exploitation of workers who are paid almost nothing to provide information on normative human behavior. Your character, trying to cope with all this, is roboticized and artificially limited. Except for those windows of room to type out an answer — an answer that won’t be interpreted meaningfully by the system, but gives you a space, anyway, to make a reflective choice about what you think should be said at this moment, whether that’s “I’m so, so sorry” or “please tell me more about the manure.” Those moments to type invited me to invest more deeply, to think more about the other characters, to perform my character, before moving on.
The support tickets themselves are extremely tightly written, as they have to be: they need to establish the world’s premise and its horrific spin-off details, as well as the personalities (not always likable) of the ticket-senders, in a very compact space. Sub-Q imposes a tight word limit, but this is a piece that makes the most of every sentence. Morayati requires the reader to do a certain amount of interpretation and reading between the lines — but there’s a lot of material to work with, characters emerging into three dimensions from just a few sentences of email.
Sooner or later, you’ll do something that irritates your supervisor, and then you’ll be warned that you’re about to be kicked off the system. Critically, you’re left with a grace period of a couple more actions — actions that I used to, for instance, send my personal email address to a character I especially pitied. I did feel that this kicked me out of the experience while I still wanted more, and was still curious about the outcomes for the people I was reading about — so I wound up replaying several times to try to learn more.
And I think that was about the ideal experience. On those multiple playthroughs, I came to feel I had a grasp on the nature of this world… and also of the precise limits of my agency in it. Fixing the corporate mess of this world is not possible. Expressing some solidarity in isolated cases is possible; and while that’s pathetically less than one would like to be able to accomplish, it makes for a world that is not completely hopeless.