I. Moral Questions
What is morality exactly? At its core it’s about what is “right” and what is “wrong.” But those terms are themselves not well defined. And if you think about it, morality is a very expansive area of inquiry. Here are some questions that seem distinctly “moral” to me:
What sorts of goals should I pursue in my life? What goals should society be organized around?
What rules should I follow in daily life? What actions are permitted? What actions are forbidden?
What obligations do I have towards other people?
Who should I praise? Who should I scorn?
What sorts of actions ought to be punished?
Who should I seek to emulate? Who should I mark as worthy of emulation by others?
What habits ought I cultivate in myself? What habits ought I encourage others to cultivate within themselves?
Who should I associate myself with? Who should I avoid?
The above is by no means exhaustive. But I think it gives a sense of the scope of questions that might be considered “moral” questions.
I think some moral debate stems from conflict over which of these questions should be prioritized. Someone who is naturally goal-oriented may find consequentialist moral systems more appealing. They’re inclined to pick a worthy set of goals and then think of ways to bring those goals about. But someone who is naturally more process-oriented may find deontological systems more appealing. They want to make sure they pick the right set of rules and then follow those rules with care. Realistically though, you need both. You need goals to motivate any sort of action, and then you need policies to guide your actions in pursuit of those goals. Omitting either side will leave you with a big hole in your ability to navigate moral questions.
I have my own bias here. I think goals are more important than process. But I nonetheless think rules are important. Even the most hardcore consequentialist needs to figure out policies to follow and heuristics to apply. It’s impractical to constantly re-assess moment by moment and always pursue the exact action that optimizes all your preferences. Equally important, rules are necessary for cooperation. Binding everyone to a shared set of rules makes everyone better off.
On the other side of things, following rules with no regard for outcomes results in absurdity. Consider the classic conundrum in Plato’s Euthyphro. Socrates asks: what does it mean to be pious? And Euthyphro responds that it is simply what is pleasing to the gods. But Socrates continues to press, asking if it is pious because it is pleasing to the gods, or if it is pleasing to the gods because it is pious? Could something clearly offensive to justice become pious simply because it pleases the gods?
Another way to think about it: if the rule could be anything, why should it be something just and reasonable? Why do we all accept a rule to avoid killing each other instead of engaging in indiscriminate murder? It seems fairly obvious to me that the sorts of rules we choose to follow are rules that lead to harmonious social relations and overall prosperity. We don’t randomly select rules; we select rules that lead to outcomes we find preferable.
So rules and outcomes are both important. But I’ll stand by my bias: I think outcomes are primary. If a rule leads to terrible outcomes, then the rule is foolish and ought to be abandoned. Rules are a vital tool in our toolkit. But rules only make sense given some broadly understood goals we’re hoping to achieve by following those rules.
There’s a more subtle point I want to get at though. Suppose someone comes up to me and describes a situation. “Alice approached Bob and asked him if he could keep a secret. Bob agreed. Alice then told Bob that she was planning on murdering their mutual friend Charlie. Bob, horrified, told Charlie about Alice’s plot, foiling her murderous plan. Did Bob do the right thing?”
What I want to point out is that “did Bob do the right thing?” could refer to any number of different moral questions, each of which might have different answers. Let’s disambiguate the question:
Should Bob be punished for breaking a promise? If so, what punishment is appropriate? Alternatively, should Bob be rewarded for making the right choice in a difficult situation?
Should we publicly praise Bob as worthy of emulation? Should we encourage children to be more like Bob? Or should we shame Bob and tell children to not be like him?
How should an individual, possessing all the facts of the matter, privately feel about Bob’s choices?
Did Bob’s actions result in better or worse consequences down the line? Given Bob’s knowledge of things, what outcomes were predictable?
Was Bob mistaken to make the promise in the first place?
Conditional on having made the promise, should Bob have kept the promise, or was he right to cut his losses and break his word? What rules ought to be applied in terms of when to make a promise and when to break a promise? Is there any set of reasonable rules under which both of Bob’s choices (making the promise, then breaking the promise) are justified?
Should we publicize Bob’s decision to break a promise, or should we suppress this information as potentially dangerous?
What I want to emphasize is that all of the above are different questions. Answering certain questions one way does not immediately determine our answers to all the other questions. For instance, we might believe that the law ought to punish Bob because it is important to maintain certain standards. But privately we might feel Bob did the right thing, and commit to making the same decision if we find ourselves in a similar scenario. But we might still teach our kids to never break a promise no matter what, because they aren’t yet prepared to process a nuanced situation like this. And finally, we may notice that Charlie turned out to be a terrible person, and letting Alice murder him would have led to better consequences down the line. But we may excuse Bob for not foreseeing this, given that he had no way of knowing.
I don’t want to suggest that a proper disambiguation of moral questions would resolve all moral conflicts. I think different people may consider the facts fully and come to different conclusions. But I DO want to suggest that naive “utilitarianism” or “deontology” are incredibly crude instruments, essentially unfit for any sophisticated moral thinking. Trying to reason purely from consequences will miss enormously important questions around what incentives are being created and what rules we want to encourage. Reasoning entirely from pre-determined rules will fail to consider if the rules themselves are fit for purpose or in need of amendment.
In the next few sections, I’m going to lay out the building blocks that I think are necessary for working through moral questions.
II. Normative Statements
Let’s look at a statement like “you ought to do X” or “you should do Y.” I think we can get a lot of clarity on these sorts of statements by stepping outside of morality for a moment. For instance, I might express to a friend that I’m looking to buy a car, but it’s important to find a reasonable price. The friend might respond, “Oh, you should talk to Dave. He has a car he’s been trying to sell. He’ll give you a good deal.”
What does this statement mean? Well, the way I read it, it’s suggesting a course of action that will help you achieve your stated goal. It’s sort of got the structure of a conditional:
If you want to get a good deal on a car, you should talk to Dave.
Another way to look at it is that it is an assertion about what course of action aligns with the stated goals at hand. Your current course of action is failing to achieve the outcome you want. This alternate course of action is better aligned with your objective, so you should consider pursuing it.
I think that porting this structural understanding back to morality is illuminating. When we say you ought to do something, we mean that the proposed action is well-aligned with whatever shared moral objectives are understood to be in play. When we say “you ought to be kind to strangers” we mean, “given my understanding of our shared moral objectives, being kind to strangers ought to help bring those objectives about.”
III. Preferences
Preferences are just things we want. Humans tend to overlap considerably in preferences, but the overlap is not total. Preferences may also come into direct conflict if multiple people want the same scarce thing.
In my moral framework, morality is ultimately about aligning our actions so as to bring about the outcomes we prefer. This involves both personal alignment as well as societal alignment. Societal alignment is necessarily a negotiation, a compromise between different individuals. Fortunately, humans overlap enough with each other that mutually beneficial cooperation is typically possible.
The approach to morality I’m going to lay out here does NOT offer any metaphysical backing to anyone’s preferences. There is therefore no ultimate metaphysical source of good and evil. But insofar as we have preferences, we can engage in moral reasoning to try and satisfy the preferences we find ourselves to have.
Is this position relativism? I’m not sure. I don’t have any tool for deciding what the “correct” preferences might be. But given you already have things you care about, I think your actions can be objectively better or worse aligned.
IV. Policy
A policy is a pattern or guideline for moral conduct. The function of a policy is to help align our actions with our preferences. It is not generally possible to directly predict the outcomes of any particular action. So in practice, we need to adopt policies that guide our actions.
For instance, I may choose to establish a policy of telling the truth. In general, I realize that I benefit when other people know they can take me at my word. I may also simply observe that things go better for me when I am honest. I could of course try and make a decision in each particular scenario about whether or not to be honest. But adopting a policy is simpler. It also makes my actions more predictable to others, particularly if I make a point of sticking to my policy even when it causes problems.
Policies are ultimately made to serve our chosen ends. It is a mistake to stick to a policy that is causing pointless pain. Policies can always be amended or complicated. Though as we will see in the next section, shared policies may require renegotiation.
V. Cooperation
What happens if we have conflicting preferences? For instance, I would like to be named god-emperor of earth. I imagine many others have similar preferences. But these preferences are in conflict - we can’t all be god-emperor. Moreover, many people who would enjoy being god-emperor would, as a second-best option, prefer there be no god-emperor at all, and instead prefer we use some other mechanism for determining who gets to be in charge of things (for instance, a democratic election).
In general, everyone is better off when we cooperate on shared preferences. But cooperation can be derailed by people who refuse to cooperate and instead chase their own preferences with no regard for others. Such people may free-ride off the work done by society without contributing anything back. The solution to this problem is that we, through various methods, negotiate shared policies that everyone is expected to follow. Shared policies are thus rules. People who violate the rules get punished. The point of punishment is to make sure that even a totally self-interested person finds it in their best interest to fall in line with the rules.
Cooperation, defection, punishment, and collective action are all immensely complex questions and I’m not going to dive into them too deeply here. But for our purposes, the important thing is that we’re trying to build a society where everyone is incentivized to do things that advance the broadly shared preferences of the community. This is, once again, downstream of consequences: we want certain things, so we organize society such that those things occur. This process often involves creating rules for conduct. It may also involve creating laws, special rules that are enforced with particular rigor (usually by a political authority itself bound to very particular rules of conduct).
There’s another bit of alchemy that happens when humans cooperate on mutual preferences. We often elevate shared preferences to the level of a moral value. A value is a special type of preference - it is a preference not just about oneself, but about the community as a whole. A value is often a preference about the type of community we want to live in. For instance, we may prefer to live in a community where the poor are taken care of. We may prefer to live in a community where the elderly are respected. We may prefer to live in a community where certain religious ideas are widely shared. These values can become powerful enough to transcend many ordinary selfish preferences.
Values are special because they elevate humans beyond self-interest. Values expand the circle of concern from oneself, to one’s family, then to the community or even beyond. As I see it, values are an emergent property of strong, high-functioning communities.
If I were to be pressed on finding a metaphysical source of good, it would be in these communal values where people subsume their personal interests to a communal interest. But for now, I prefer to keep the metaphysics out of it, and just say that values are a special type of preference.
VI. Character and Virtue
People frequently express a preference but then act in ways that are not aligned with that preference. For instance, we might say that we value honesty, but nonetheless make occasional dishonest statements. Sometimes that’s because honesty comes into conflict with a different preference we care about more. But often, we simply fail to live up to our own ideals, doing what is easy instead of what is right.
This is the key observation of virtue ethics. Operating somewhat orthogonally to other ethical systems, virtue ethicists point out that humans are essentially creatures of habit. If we are accustomed to a behavior, we will tend to engage in that behavior regardless of whether it is appropriate. So the key moral project is to cultivate the appropriate virtues within ourselves. If we are in the habit of doing the right thing, then we will naturally do what is right, even in difficult situations. We can refer to the set of moral habits a person cultivates as that person’s character.
I think that we ought to understand virtue ethics as an insight that can be “bolted on” to any ethical system we want. Whatever specific things we’re trying to do, whether we’re trying to stick to rules or pursue certain goals, cultivation of the right habits is going to be crucial for success.
And in practice, this is the real work of morality. We all tend to agree on things more than we disagree. Almost everyone agrees that honesty, integrity, courage, discipline, care, discretion, etc. are important to doing the right thing. If you actually cultivate these virtues, you will be succeeding under almost any imaginable ethical system.
VII. Holistic Morality
If we put these pieces together, I think we have a compelling understanding of morality. At the root of things are preferences and values. These are the things we care about. Preferences and values are filtered through character, rules, and policy. This produces an action.
The above process, applied iteratively, creates outcomes. Sometimes an individual action has profound consequences. But more often, outcomes are shaped continuously, in an ongoing way, as we repeatedly choose actions that are consistent with a particular policy that is well-aligned to a desired outcome.
A failure at any point in the chain can cause problems, and is fair game for moral criticism. But when everything is working, human behavior aligns so as to produce outcomes that are broadly beneficial.
Let’s revisit some of our moral questions from the beginning:
What sorts of goals should I pursue in my life? What goals should society be organized around?
You should pursue your own preferences and values. Society ought to be organized around shared values.
What rules should I follow in daily life? What actions are permitted? What actions are forbidden?
You should seek first and foremost to follow the law where applicable (with possible exceptions if you are explicitly seeking to amend the law). Then you should conform to the typical rules of conduct that are considered appropriate within your community. Other than that, you may do as you see fit in pursuit of your own preferences.
What obligations do I have towards other people?
You are obligated to follow the rules and laws established within your society.
Who should I praise? Who should I scorn?
You should praise people who further shared values you think are important. You should scorn people who fail to live up to shared values, or who otherwise break the rules.
What sorts of actions ought to be punished?
Actions that violate societal rules, particularly formal laws, ought to be punished.
Who should I seek to emulate? Who should I mark as worthy of emulation by others?
You should seek to emulate people who seem particularly virtuous. You should praise such people so as to encourage others to emulate them as well.
What habits ought I cultivate in myself? What habits ought I encourage others to cultivate within themselves?
You should cultivate habits that naturally align with your values and preferences. You should encourage others to do the same.
Who should I associate myself with? Who should I avoid?
You should associate with people who are well aligned with values you think are important. You should avoid people who are poorly aligned with your values, or who regularly violate social rules for selfish reasons.