Ethics Meta
Aug. 28th, 2012 12:45 pm
I ran across an interesting essay that says all ethical systems can be put into three categories: deontology, virtue ethics, and consequentialism.
A deontological system judges actions as good or bad according to whether or not they adhere to a set of rules or laws. The rules themselves are givens; they are axiomatic, not derived from within the system.
A virtue ethics system is concerned with the character of the actor. If somebody intends to do good, then the action is morally good, and vice-versa. The characteristics of a good person are again axiomatic.
Consequentialism judges actions based on their outcomes. Did the action result in a net increase in some metric of goodness (generally taken as human happiness)? Then it was good.
There are also hybrids, like voluntarism, which says that actions are good if they are good under both a deontological and a virtue ethics system: you have to follow the rules AND mean well to do good. And you can nest virtue and deontology within consequentialism by regarding them as guidelines that generally (but not universally) lead to good outcomes; caching the results of your ethical calculus, as it were.
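(To make that composition concrete, here's a minimal sketch in Python. Everything in it -- the Action fields, the sample rule, the toy "white lie" case -- is my own invented stand-in, not anything from the essay.)

# A toy sketch of the three systems plus the voluntarist hybrid.
# Every name here (Action, breaks_promise, happiness_delta, the
# sample rule) is an invented placeholder, not a real formalism.

from dataclasses import dataclass

@dataclass
class Action:
    breaks_promise: bool    # input to the rule-based check
    actor_means_well: bool  # input to the character-based check
    happiness_delta: int    # net change in the chosen goodness metric

RULES = [lambda a: not a.breaks_promise]  # axiomatic, given from outside

def deontology_approves(a: Action) -> bool:
    # Good iff the action adheres to every rule in the given set.
    return all(rule(a) for rule in RULES)

def virtue_approves(a: Action) -> bool:
    # Good iff the actor has the traits of a good person (here, just intent).
    return a.actor_means_well

def consequentialism_approves(a: Action) -> bool:
    # Good iff the outcome is a net increase in the goodness metric.
    return a.happiness_delta > 0

def voluntarism_approves(a: Action) -> bool:
    # The hybrid: you have to follow the rules AND mean well.
    return deontology_approves(a) and virtue_approves(a)

white_lie = Action(breaks_promise=True, actor_means_well=True, happiness_delta=2)
print(deontology_approves(white_lie))        # False: a rule was broken
print(virtue_approves(white_lie))            # True: good intent
print(consequentialism_approves(white_lie))  # True: net happiness went up
print(voluntarism_approves(white_lie))       # False: needs rules AND intent

The nesting move in the last paragraph is then just memoization: run the consequentialist calculation once per kind of situation and reuse the verdict as if it were a rule.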
Which is well and good if you only ever contend with a single ethical system. But the nagging question is: how do you choose between competing systems? If I'm presented with two deontologies, one of which has rules that say that X is good and Y is bad, and the other of which says the opposite, how do I decide which one is right?
Obviously, you can just say "oh, this is the one I grew up with, so I'm going to stick with that". Or pick an authority figure to delegate your agency to. But if you honestly want to give each one a fair shake, to say "what if?" and sort through the implications and weigh the systems using some method that isn't just pure subjectivity, to use some kind of consistent framework that you might expect other people to use to come to a similar conclusion -- or to come to a different conclusion, but in such a way that you'd at least be able to understand where your differences are coming from...
It seems like the only way to do that intercomparison is using something that's basically consequentialism. In which case, it seems like you can take a very handy shortcut and just skip straight to the only self-consistent solution. Happily, it's also the one with axioms that are the closest to universality we can get, being rooted in the commonality of human experience, which makes it a lot easier to bridge the gap between people with major differences. Maybe this is why it's so pervasive in the modern world.
no subject
Date: 2012-08-28 08:01 pm (UTC)
Basically, I haven't been able to think of a way to do an intercomparison of ethics within a deontological or virtue ethics framework that doesn't end up begging the question, but I'd be delighted to be proven wrong, because that promises to be really interesting.
no subject
Date: 2012-08-28 08:08 pm (UTC)
I'm going to read the whole linked essay later, but I wanted to say that I'm not sure his description of virtue ethics is quite correct, at least by my understanding. I don't think virtue ethics is about intention. It's about character, in the sense that it posits traits that good people have. Unlike the others, it doesn't focus so much (or so directly, anyway) on what people DO, but on what they are. An interesting effect of this for ethics is that actions are measured by any number of measuring sticks at the same time, and there must be a fairly organic method of assessing things. For instance, in a given situation we might say someone comported themselves in a very honest way, but not a very compassionate one. There are no rules for weighing two virtues against each other, really. You just have to apply each of them as best you can in any given situation. And because (according to the ancients, anyway) virtues are habits, not structures of reasoning, they are applied differently and in different admixtures by different people, and that's okay.
The problem with finding the "meta-ethics" for various systems is that ultimately you come down to some kind of unsupported axioms. It's harder than it sounds, I believe, to determine and agree on what human happiness is in consequentialism, for example. An interesting element of virtue ethics, I think, is that it doesn't really try to formulate a universal vision or outline every rule to follow. New virtues can essentially be discovered or revealed as culture and society change.
no subject
Date: 2012-08-28 08:26 pm (UTC)
Agreeing on happiness is indeed tricky; I think it vastly simplifies the problem to decompose it into something like Maslow's hierarchy of needs, rather than treating it as a unitary measure.
I like the idea that when virtues come into conflict, the system just punts and says "do the best you can". That's a very mature way of dealing with it. My own leaning is to say that consequentialism is what you have to have underlying whatever system you use, but that virtues are a very useful tool for summarizing the results of a consequentialist analysis. They're macros, basically, and 99% of the time, they'll do the job. (Just remember to keep track of the assumed context for them...)
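(Continuing the programming metaphor, here's a rough sketch of virtues-as-macros as memoization, with the assumed context folded into the cache key so that a changed context forces a fresh analysis. The situations and the stand-in analysis are invented for illustration.)

# "Virtues as macros": cache the verdict of an expensive consequentialist
# analysis, keyed on BOTH the situation and the assumed context, so that
# changing the context invalidates the cached rule. All names invented.

from functools import lru_cache

def full_consequentialist_analysis(situation: str, context: str) -> bool:
    # Stand-in for the expensive case-by-case calculation.
    return not (situation == "tell the truth" and context == "hiding a fugitive")

@lru_cache(maxsize=None)
def virtue_macro(situation: str, context: str) -> bool:
    # Computed once per (situation, context) pair, reused thereafter.
    return full_consequentialist_analysis(situation, context)

print(virtue_macro("tell the truth", "everyday life"))      # True
print(virtue_macro("tell the truth", "hiding a fugitive"))  # False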
no subject
Date: 2012-08-28 08:32 pm (UTC)
Personally, I think we actually pick by having some results that we are willing to say "this is good" and "this is bad" about before we start reasoning about ethics, and we pick a system that results in most of our pre-made good/bad judgments being supported. After all, if you don't have *some* idea what the words mean, you can pick any system you want, and it doesn't matter. My complicated system for telling the difference between Blurgle and Farb can be whatever I want, and no one will care.
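(If I were to code up that selection procedure, it would look something like this toy sketch -- the cases and candidate systems are of course placeholders.)

# Toy sketch: score each candidate system by how many of the pre-made
# good/bad judgments it reproduces, and keep the best fit.

cases = {
    "keeping a promise": True,            # pre-made judgment: good
    "gratuitous cruelty": False,          # pre-made judgment: bad
    "white lie to spare feelings": True,  # pre-made judgment: good
}

systems = {
    "rigid rules":   lambda act: "lie" not in act and "cruelty" not in act,
    "net happiness": lambda act: "cruelty" not in act,
}

def fit(system) -> int:
    # Number of pre-made judgments the system agrees with.
    return sum(system(act) == verdict for act, verdict in cases.items())

print({name: fit(fn) for name, fn in systems.items()})
# {'rigid rules': 2, 'net happiness': 3}
print(max(systems, key=lambda name: fit(systems[name])))  # net happiness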
no subject
Date: 2012-08-28 08:49 pm (UTC)
I think you're definitely right about what we actually do. I want to say that this bootstrapping from intuition maps in a fairly clean way to a consequentialist analysis with axioms based on human psychology. I want to, but I won't, because I think it'll take a fair bit of thought to determine whether (or rather, how much) that's actually true.
Thanks!
no subject
Date: 2012-08-28 09:03 pm (UTC)
I think you're selling virtue a little short. If you think kindness is a virtue, then it prompts you to act in a way that is kind. There are plenty of situations where that's useful guidance. Right?
no subject
Date: 2012-08-28 09:39 pm (UTC)
As for uncertainty, no, it's not a big problem. But if the goal of an ethics system is... well, maybe I should back up and ask what, in your argument, the goal of an ethics system is? If it is (as I am inclined to think of it) to provide guidance about how to act, then it should... well, provide guidance on how to act. Rules are straightforward that way: they *tell* you how to act. Virtue ethics and consequentialism (as you've defined them) give general principles on how to decide what to do. That makes them more flexible and adaptable to new or unexpected situations, which is great. But it also means you have to do a lot more work in the moment to decide what to do - you have to figure out how to apply those principles to the situation you're in. Sometimes that's really hard, and it can be helpful to have a rule to fall back on.
no subject
Date: 2012-08-28 11:05 pm (UTC)
I guess I think about this stuff because when I was coming out, I found myself with a conflict between the rules of Mormonism and the values of myself and other people I cared about, and I had to reason myself to a resolution. And I feel like it would be good to develop that experience into something of general use, rather than everyone having to make it up on their own...
no subject
Date: 2012-08-29 04:17 am (UTC)
That is, a consequentialist moral system says that the moral value of an act derives from the expected change in X from that act, where X is whatever we value.
But what do we value?
Reducing suffering? Increasing joy? Increasing choice? Increasing distinct lives? Increasing total tonnage of life? Increasing amount of blue things? Increasing God's appreciation for humanity? Some combination?
Different values => different consequentialist moral systems, and they're incommensurable.
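(A toy sketch of that incommensurability: the same two acts, ranked under two different choices of X. The acts and outcome numbers are invented.)

# Same acts, different value functions X, opposite rankings.
acts = {
    "build a hospital": {"suffering_reduced": 8, "choices_added": 2},
    "fund the arts":    {"suffering_reduced": 1, "choices_added": 9},
}

value_functions = {
    "reduce suffering": lambda outcome: outcome["suffering_reduced"],
    "increase choice":  lambda outcome: outcome["choices_added"],
}

for name, X in value_functions.items():
    ranking = sorted(acts, key=lambda a: X(acts[a]), reverse=True)
    print(name, "->", ranking)
# reduce suffering -> ['build a hospital', 'fund the arts']
# increase choice -> ['fund the arts', 'build a hospital']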
So if we want intercomparability, we need to ask how to compare values.
Well, either that, or hope that really deep down all life forms that matter value the same thing. (I find that unlikely, but I know people who believe it.)
no subject
Date: 2012-08-30 04:03 am (UTC)
I figure for time-critical decision-making, we mostly make judgments in advance and cache them as rules / virtues / principles / heuristics / whatever to be invoked on the fly. But if you have to make complex ethical judgments quickly, I think they'll frequently end up wrong no matter what system you're using, and there's probably no way around that.
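(In code, that's just a fast path over a precomputed table, with the slow full analysis reserved for when there's time. A sketch, with invented situations:)

# Sketch of the fast-path/slow-path idea: precompute verdicts for the
# common situations, fall back to the full (slow) analysis only when
# there's time. Everything here is an invented placeholder.

PRECOMPUTED = {            # judgments made in advance, invoked on the fly
    "someone is drowning": "help them",
    "found a lost wallet": "return it",
}

def slow_full_analysis(situation: str) -> str:
    # Stand-in for the deliberate, expensive ethical calculus.
    return "weigh the consequences carefully"

def decide(situation: str, time_critical: bool) -> str:
    if time_critical:
        # Under time pressure, use the cached rule (and accept that
        # an uncached complex case may simply be gotten wrong).
        return PRECOMPUTED.get(situation, "best guess")
    return slow_full_analysis(situation)

print(decide("someone is drowning", time_critical=True))    # help them
print(decide("novel trolley problem", time_critical=True))  # best guess
print(decide("novel trolley problem", time_critical=False)) # weigh the consequences carefully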