Moral Discernment as a Game Mechanic

Recently I’ve been trying to figure out what to do about a bad situation. I don’t see a solution that doesn’t violate some value I consider important. There are many possible things to do or refrain from doing, and I am concerned about the consequences of almost all of them. The stakes are pretty high. Outcomes affect quite a few people. It is hard to calculate the risks. People I love disagree with me about what the priorities are, and about who has a right or responsibility to do anything.

This is the kind of thing that makes moral choices hard in real life, in my experience. It’s not “kill puppy or save puppy.” Nor a Fate-style escalation, “poke puppy / kick puppy / make puppy ill / kill puppy” — though I realize that was partly a philosophical thought experiment.

It’s not “there’s a very very painful, possibly heart-breaking, thing that you morally have to do.” I’ve been there, too, but while that situation might be miserable, it’s at least clear. There’s no A vs B. There is only option A, and what you have to do to get through A, and what you can salvage when A is over. Not a choice mechanic but a challenge mechanic.

It’s not “I have so many feelings about this that I don’t trust my own motives.” I do have feelings. I’ve also had quite a bit of processing time.

It’s not “I habitually do not behave as well as I want to behave, and I need to do the slow work to improve those habits: keep my temper, drop an addiction, work out more often, stop saying yes to projects I don’t have time for.”

This is something else. It’s a decision, but it’s a slow, complicated, multipart decision in which the possibilities all seem at least somewhat sickening, and not all the possible solutions are visible at the outset.

And I keep thinking: have I ever played a moral choice that felt like this? What would that even look like?

Recently, also, there has been a lot of talk about the infamous Google gender memo. Heaven knows the internet doesn’t need another take on that, and frequent readers of my blog can probably guess what I think about its contents.

But because all of this has been circulating, I read Yonatan Zunger’s response post and this line chimed with me:

Engineering is not the art of building devices; it’s the art of fixing problems.

And I thought: yes, because of how my brain is furnished, I’ve been trying to approach this moral problem as an engineer. I’ve been thinking about what needs to be fixed, and what the highest-priority fixes are, and what the consequences are if those fixes don’t happen, and how one might tactically approach each of them, and what open questions are left to be resolved. But I also know that even my best work on this is probably going to end with something unhappy.

I’ve been thinking about the principle of double effect, and about Catholic doctrines about how to approach the undesirable side effects of a moral action. I am not a Catholic, but alongside much that I don’t believe or don’t agree with, the Catholic tradition contains centuries of research into moral philosophy.

There’s one game I’ve written that tries to explode a single moral choice into a large space — though it’s generally considered one of my weaker serious pieces. Floatpoint is about the question of whether to rescue an advanced group of genetically altered and self-altering people by bringing them back to Earth, and if so, under what conditions. The colonists have been away from Earth for centuries, and are now so different from Earth-based humanity in both mores and physical form that their presence is likely to be disruptive and possibly dangerous. However, if left on their home planet, they will probably not survive. And they have some things of value to offer to humans as well.

One of my major goals with this was to allow the player to articulate a complex choice, one where they had to think about the components of the decision. So you’re constrained to a symbolic language when making a treaty at the end of the game: physical objects communicate not only whether you will accept the colonists back on Earth, but also what limits you’ll put on them when they get there. In addition, the design of the game tries to make sure that in order to get the physical objects needed to communicate any particular decision, you’ll have to encounter more information about the nuances of that choice.

It’s… well, it’s trying to do morality and communication through the medium of a parser IF world model and medium-sized dry goods. It’s not terribly realistic, and arguably it doesn’t give nearly enough build-up to make the player feel the implications of those choices. I also didn’t really think about it in terms of making a mechanic out of the discernment process itself.

So what would a mechanic look like if it were trying to capture prioritization of principles, the study of stakes, the consideration of risks?

Maybe:

Sift. You start with dozens, hundreds, maybe even thousands of action choices. Before you pick, you select rules for yourself and watch the list of possibilities narrow, as though you were filtering Airbnb. “Only actions that won’t cause trouble for my friends.” “Only actions that will definitely remove my enemies.”
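
To make that concrete, here is a minimal sketch in Python of how the filtering might work. Everything in it is invented for illustration: the rule names, the action properties, and the actions themselves stand in for whatever the real fiction would track.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    endangers_friends: bool = False
    removes_enemies: bool = False

# Rules are named predicates the player opts into, like search filters.
RULES = {
    "won't cause trouble for my friends": lambda a: not a.endangers_friends,
    "will definitely remove my enemies": lambda a: a.removes_enemies,
}

def sift(actions, chosen_rules):
    """Return only the actions that satisfy every rule the player has chosen."""
    return [a for a in actions if all(RULES[r](a) for r in chosen_rules)]

actions = [
    Action("expose the plot publicly", endangers_friends=True, removes_enemies=True),
    Action("negotiate in secret"),
    Action("do nothing"),
]

# Selecting both rules at once exposes their incompatibility: nothing survives.
print([a.name for a in sift(actions, RULES)])  # -> []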

This feels distant and cerebral, though. You could make some point about how certain goals are mutually incompatible, but exploring the database to find that out feels like not much of a story.

Elaborate. You start with a single choice lurking at the bottom of the screen, a single idea about what to do next. It has a display suggesting probabilities and outcomes for that choice, but the display starts out pretty vague.

But you don’t have to click it immediately. You also have ways to interrogate or vary that choice. Use the MITIGATE RISK tool on your choice and it’ll spawn a second choice, safer but maybe less powerful. Use the RESEARCH tool and it’ll give you more precise information on your success odds, or suggest horrible new outcomes.
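
A rough sketch, again in Python with invented numbers and effects, of the data this might need: each choice carries true odds that the display only partly reveals, and each tool returns a new or modified choice rather than resolving anything.

```python
import random
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Choice:
    label: str
    success_odds: float        # the true value, hidden from the player
    uncertainty: float         # how vague the on-screen display is
    power: float               # how much the choice would actually accomplish
    known_outcomes: tuple = ()

    def display(self):
        lo = max(0.0, self.success_odds - self.uncertainty)
        hi = min(1.0, self.success_odds + self.uncertainty)
        return f"{self.label}: {lo:.0%}-{hi:.0%} to succeed; outcomes: {list(self.known_outcomes)}"

def mitigate_risk(choice):
    """Spawn a second choice: safer, but maybe less powerful."""
    return replace(choice, label=choice.label + " (cautiously)",
                   success_odds=min(1.0, choice.success_odds + 0.2),
                   power=choice.power * 0.5)

def research(choice, hidden_outcomes):
    """Sharpen the odds display, and possibly surface horrible new outcomes."""
    found = tuple(o for o in hidden_outcomes if random.random() < 0.5)
    return replace(choice, uncertainty=choice.uncertainty / 2,
                   known_outcomes=choice.known_outcomes + found)

c = Choice("confront them", success_odds=0.5, uncertainty=0.3, power=1.0)
print(c.display())                 # vague at first: "20%-80% to succeed"
print(mitigate_risk(c).display())  # a second, softer option appears
print(research(c, ("a friend finds out", "you lose your job")).display())
```

The point of the shape is that the tools never resolve the dilemma; they only grow and refine the option set.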

It’s easier (for me) to imagine how fiction could be attached to these actions, and how the result could feel something like a story, than with the first mechanic. What I don’t like so much about this one is that it is so purely focused on practicalities rather than principles; it seems to suggest that a perfect solution is available and that the process of arriving there is technocratic.

 

8 thoughts on “Moral Discernment as a Game Mechanic”

  1. “I don’t see a solution that doesn’t violate some value I consider important. There are many possible things to do or refrain from doing, and I am concerned about the consequences of almost all of them. The stakes are pretty high. Outcomes affect quite a few people. It is hard to calculate the risks. People I love disagree with me about what the priorities are, and about who has a right or responsibility to do anything.”

    You are Crown Princess Elodie and I claim my 5 lassi.

  2. You’ve done it again, Emily – made me think. I find this idea/concept very interesting. IMO, as you suggest, there are two kinds of approaches to making decisions: logical and emotional. In our culture, it is generally accepted that logical is superior to emotional. For example, the admonishment “You’re just being emotional” is considered highly critical. Personally, as I get older and a little wiser, I no longer go along with the generally accepted prejudice against emotional approaches to handling life’s challenges. Consequently, I would suggest that the game mechanic be based on how the player feels about the various situations presented.

  3. The German philosopher Axel Honneth has argued that in making moral judgements, one should be careful to avoid abstracting from the social roles and relationships one already finds oneself in. He distinguishes between being impartial, which requires bracketing those attachments which might bias or prejudice one’s decision; and the unreasonable expectation (as demanded, for example, in the Kantian moral tradition) of abstracting even from those norms which are constitutive of our social relationships. Using the example of a professor who discovers that his colleague has engaged in some harmless plagiarism, Honneth writes:

    “For our professor, this means that he cannot simply ignore the implicit norms of friendship and collegiality. However he may judge the situation and whatever he may end up doing, he must relate to these norms in some way, for they determine, virtually externally, both the starting point of his moral conflict and the possibilities for finding a solution…. [A]ll moral discourse presupposes forms of mutual recognition that are so constitutive of the social environment that they cannot be questioned or suspended by its members.” (Freedom’s Right, pp. 111-112)

    Ultimately Honneth is concerned with a notion of “moral freedom,” which of course is not your focus here. But leaving that aside, I couldn’t help wondering whether part of the reason you feel unsatisfied with the “technocratic” approach to moral decision-making is that it goes too far in abstracting from the social relationships and norms within which moral dilemmas would normally be embedded.

  4. This is a fascinating approach to moral choices. One area where I could see such a tool providing value outside a specific fictional situation is in recognizing consequences beyond one’s own limited worldview. Thinking back to Versu and the social simulation between small groups, what gets modeled is skewed by our biases, and the moral repercussions expressed in-game reflect and amplify those biases. I like that this model has more transparency around impacts, as it exposes gaps to an end user who can help improve the breadth of implications of an individual’s choice.

  5. In an interactive story you could make this a choice about pain allocation. There are bad consequences to every possible action, and many parties the pain can be visited on. Do you visit all the pain on one party and potentially destroy it? Or do you spread the pain across many parties and become unpopular with everyone? How much do you sacrifice yourself, and thereby distance yourself from your goals?

  6. Something I’ve been exploring along these lines is to have a series of actions, responses, and suggestions arise in rhythm from a subconscious set of possibilities; you are limited to choosing from those that surface in order to construct a response to any given moment. This has been about individual responses rather than complex, multi-step decisions (which have yet to be worked out).

    Then the tools that you develop can introduce new types of responses you can have. They are about personal growth and learning how to do things. Tools could also similarly be filters that change which types of responses come up at different times. You train yourself to respond to sadness with empathy/mirroring, or with suggestion, or with consoling. And then develop intricate capacity over time, a way of navigating yourself in situations and identifying when some tools versus others are useful.
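
    A minimal sketch of what I mean, in Python, with all names and weights invented: responses are weighted draws from a pool, and training shifts the weights so new kinds of response surface more often.

    ```python
    import random

    # The "subconscious set": response types with weights. Old habits start heavy.
    pool = {
        "mirror the sadness": 1.0,
        "offer a suggestion": 1.0,
        "console": 1.0,
        "change the subject": 3.0,
    }

    def surface(pool, k=2):
        """Offer the player k distinct responses, drawn by weight; they must pick from these."""
        remaining = dict(pool)
        offered = []
        for _ in range(min(k, len(remaining))):
            types = list(remaining)
            pick = random.choices(types, weights=[remaining[t] for t in types])[0]
            offered.append(pick)
            del remaining[pick]
        return offered

    def train(pool, response_type, amount=0.5):
        """Practicing a response makes it more likely to arise next time."""
        pool[response_type] += amount

    print("you could:", surface(pool))
    train(pool, "mirror the sadness")  # over many scenes, empathy starts to win out
    ```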

    Just describing and understanding how consequences will come about is a real challenge in life. I think the messy long-term prioritization that happens in character definition allows for more of this ambiguity and avoids the surgical feeling of filtering.

    These ideas, for me, have been about telling a story of growth and change, and about how people make decisions. They have also been driven by a desire to simulate meaningful characters and stories that are not explicitly written. Which perhaps isn’t what you’re going for. But I hope these ideas are interesting to you. You touched on multiple aspects of them with your presented options, and this is some of the first writing about it that I’ve seen on the internet. Thank you.

  7. I think the challenge is largely in the writing and not in the mechanics. I’m going to incorporate a choice like this in my Choicescript WIP. I’m nearing the end of the first draft so the timing is apt.

  8. Avoiding technocracy suggests designing choices to encounter some voting paradox.

    intransitivity (A > B > C > A):

    A small but frequent Sift motif occurs in (Apocalypse) World dice resolution (effect production), in the form of ‘pick k±1 of n’ lists. These include both useful effects and ‘prevent negative consequence’ items (to inevitably be passed over, deliberately).

    Would be amusing for an Elaborate to yield an intransitive set of choices due to composition of utility functions (whether player’s own, or their constituents’). This could be really mystifying if the STRAW POLL tool reports (paradoxical) pairwise majority preference, pushing the player into an expensive series of INTERVIEWS.
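
    A toy version of that paradox, with invented ballots: every pairwise straw poll is decisive, yet the majorities form a cycle.

    ```python
    # Three constituencies rank options best-to-worst; each pairwise straw
    # poll has a clear majority winner, yet the winners form a cycle.
    ballots = [
        ("A", "B", "C"),
        ("B", "C", "A"),
        ("C", "A", "B"),
    ]

    def majority_prefers(x, y):
        """True if more ballots rank x above y than y above x."""
        margin = sum(1 if b.index(x) < b.index(y) else -1 for b in ballots)
        return margin > 0

    for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
        print(f"{x} beats {y}:", majority_prefers(x, y))  # all True: A > B > C > A
    ```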

    irrelevant alternatives (A>B, A>C, B>A>C):

    This looks like a penalty for opening up too many choices in a nondeterministic Elaborate, where choice C draws some of your advisors/attention/resources off of A, allowing some nonlinearity to amplify B. In a discrete setting, I don’t know.

    dictatorship (single A>B vote rules out B):

    Oh wow this evokes a whole subgenre of denied choice (specifically crushing it, not rewarding – as would a taunting SMEN-style sink or a tempting puppy to kick) that isn’t immediately relevant to voting.

    To deciding, though, between sickening possibilities and obscured solutions? Yes, I think so. Some needles out of the haystack:

    Life is Out of Season (odditycollector) offers a great many opportunities to resist, and very few to succeed. But multiple, which incentivizes a certain ruthless efficiency. It does not assume much knowledge of Homestuck.

    Dragonslayer (potatocubed) is textured with choices of attitude, moreso than defiance, and the reactive text is hardly able to keep up. The threads fray into a fascinating kaleidoscope, despite the ending-set which seeks the same inevitable.
