University of Kansas, Spring 2004
Philosophy 160: Introduction to Ethics
Ben Eggleston—eggleston@ku.edu
Lecture notes: normative ethics
The following notes correspond
roughly to what we cover, including at least a portion of what I put on the
board or the screen, in class. In places they may be more or less comprehensive than what we
actually cover in class, and should not be taken as a substitute for your own
observations and records of what goes on in class.
- EMP, chapter 6: “Ethical Egoism”
- normative ethics
- what a normative-ethical theory is
- a theory offering a principle for determining what is right in any situation
- examples of such principles
- The right act is the one that creates the most overall happiness.
- Do unto others as you would have them do unto you.
- You always ought to do whatever will advance your self-interest the
most.
- the standard method for assessing a normative-ethical theory
- See whether its implications, for particular cases, are appealing or
agreeable or acceptable.
- For the most part, this means seeing whether its implications agree with
common-sense morality.
- overview of sections 6.1–6.3
- section 6.1: “Is There a Duty to Help Starving People?”
- the principle of ethical egoism
- Ethical egoism says that each person ought to
maximally advance his or her own self-interest.
- This is meant as a moral principle—that, in any situation, the
right thing for a person to do is whatever will maximally advance his
or her self-interest.
- how related to psychological egoism
- The phenomenon to which it refers—that of an
individual maximally advancing his or her own self-interest—is
the same phenomenon to which psychological egoism refers.
- But ethical egoism refers to this phenomenon in an entirely different
way from the way in which psychological egoism refers to it. Whereas psychological egoism is a descriptive theory
(claiming that people do behave in a certain way), ethical
egoism is a prescriptive theory (claiming that people ought
[morally] to behave in a certain way). One can be an ethical
egoist without being a psychological egoist, and vice versa.
- Also, recall that psychological egoism is a meta-ethical
theory, while ethical egoism is a normative-ethical one.
- table
- header row: PE vs. EE
- first row: descriptive vs. prescriptive
- second row: meta-ethics vs. normative ethics
- some things to notice about ethical egoism
- It does not just say that, from the moral
point of view, one’s own welfare counts as well as that
of others. Rather, it says that, from the moral point of view, only
one’s own welfare counts, and others’ does not, when
one is making a moral decision about how to act.
- Ethical egoism does not forbid one to help
others, or require one to harm others. It just says that
whatever moral reason you have to help others, or not harm them,
must ultimately stem from the way in which helping them or not
harming them
helps you.
- Ethical egoism does not say that one ought
always to do what is most pleasurable, or enjoyable. It
acknowledges that one’s own self-interest may occasionally
require pain or sacrifice.
- a method for applying the principle of ethical egoism to a
particular situation
- List the possible acts.
- For each act, see how much net good it would do for you.
- Identify the act that does the most net good for you.
- That’s the right thing to do.
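- A minimal sketch of this procedure in Python (the acts and the numerical scores of net good for the agent are hypothetical, invented only to illustrate the steps; ethical egoism itself says nothing about how such scores would be measured):

```python
# Illustrative sketch only: the acts and the "net good for me" scores
# are hypothetical, made up to show the steps of the egoist procedure.
acts = {
    "keep the money I found": 10,   # hypothetical net benefit to me
    "return the money": 2,
    "donate the money": -3,
}

# Ethical egoism: the right act is whichever does the most net good for me.
right_act = max(acts, key=acts.get)
print(right_act)  # -> keep the money I found
```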
- section 6.2: “Three Arguments in Favor of Ethical
Egoism”
- first argument
- The first argument is based on the claim that
“looking out for others” is self-defeating: that is, when we
try to help others, the outcome is usually worse, overall (i.e., for all
of us), than if we each pursue our own interests.
- The problem with this argument is that it
contradicts the core idea of ethical egoism, by resting
ultimately on a principle of benevolence rather than genuine
egoism. That is, it endorses ethical egoism only as a strategy
for pursuing some other value—apparently overall
well-being; it does not endorse ethical egoism as the
fundamental principle of morality.
- second argument
- The second argument involves the observation
that a purely altruistic approach to ethics would be overly
demanding, by treating each person as a means, or resource, for
other people, and thus fails to respect the integrity and
independence of the individual human life.
- The problem with this argument is that there
are other approaches to ethics that are, in a sense,
“between” ethical egoism and pure altruism. For example, our
ordinary “common sense” approach to morality rejects
ethical egoism without being as demanding as a purely altruistic
approach would be.
- third argument
- The third argument is that ethical egoism provides the best
explanation, and unifying principle, for the various moral duties
that we think we have, such as the duty to tell the truth and keep
our promises (i.e., the moral judgments of common-sense morality).
- One problem with this argument is that ethical
egoism does not explain all of the moral obligations we think we
have. It does not explain, for example, why we ought not to break
a promise when it would be to our advantage to do so. A
second, and deeper, problem with this argument is that it
suggests that the only, or most fundamental, reason one has to
treat other people well is that it would be beneficial to
oneself to do so. This clashes with our intuitive conviction
that even when treating others well is to our advantage, there
are deeper, non-self-interested reasons for treating them in
that way.
- section 6.3: “Three Arguments against Ethical
Egoism”
- first argument
- One argument against the theory is that the
theory does not provide a way to adjudicate conflicts of
interest. For example, if my interest is opposed to yours, then
ethical egoism—by telling each of us to maximally pursue his
or her own interest—does not specify some compromise that we ought
(morally) to agree to; it just tells each of us to do his or her
best, and then whatever happens, happens. And yet many people
think that a moral theory ought to resolve our conflict in some
principled way. The thought here is not that a moral theory will
provide a resolution to our conflict that each of us will be
perfectly happy with (indeed any compromise is likely to be
non-ideal for each of us, by requiring some sacrifice of
interest); the thought is just that a good moral theory will
have something more to say about a conflict of interest we have
than just telling us, in effect, to fight it out for
ourselves.
- Note that this is not a complaint that is based on the
self-interested nature of ethical egoism. The same problem would
beset ethical altruism—the theory that says that one should always
maximally advance others’ interests. Two ethical altruists
could find themselves at loggerheads as surely as two ethical
egoists could.
- In reply to this complaint, the ethical egoist
might say that this hope that many people have, for a morality
that will settle conflicts of interest in some other way than
saying “fight it out for yourselves,” is a misguided one:
that the best account of morality (ethical egoism) just will not
serve this function that many people (misguidedly) think
morality ought to serve. Whether this reply succeeds depends,
ultimately, on one’s view of how important this function (of
providing principled resolutions of conflicts) is to the role
that morality is supposed to play.
- second argument
- Another argument against ethical egoism is
related to the first, in that it also involves ethical
egoism’s refusal to offer a compromise in cases of conflicts
of persons’ interests. But this argument differs from the
first in that the complaint of this second argument is not just
that ethical egoism refuses to adjudicate cases of conflict, but
that ethical egoism actually has contradictory implications. The
claim is that in telling me to maximally advance my interests,
the theory is also implicitly telling you not to stand in the
way of my advancing my interests; and this latter injunction
conflicts with what the theory also tells you, namely, that you
ought to maximally advance your interests (which, we have
supposed, conflict with mine).
- This argument requires the assumption that
when ethical egoism tells me to maximally advance my interests,
it is also telling you not to interfere with that. But this
assumption is not part of ethical egoism. What ethical egoism tells you to
do depends entirely on your interests (just as what it tells me
to do depends entirely on my interests). It simply tells you to
maximally advance your interests, regardless of how that may
affect my interests. The theory may counsel conflicting courses
of action, but it does not tell any one of us that doing
something (such as maximally advancing one’s own interests) is
both right and wrong.
- third argument
- Ethical egoism is unacceptably arbitrary, in that it allows
(indeed, requires!) each person to privilege himself or herself
over others.
- The previous two objections to ethical egoism
are, in a sense, structural rather than substantive:
that is, they are concerned not so much with the content of the
theory, but with the way in which its structure prevents it from
playing the role that many people think a moral theory should
play (the role of providing logically consistent, principled
resolutions of conflicts of interest). A third objection,
though, is substantive: concerned with the content of the
theory.
- This objection is very simple. It begins with
the
observation, or (if we regard it as at all controversial) the
claim, that there is no morally relevant difference between
oneself and others, generally. Then it moves to the inference
that, because of this, one is not entitled to
give special treatment to oneself in making moral decisions any
more than one is entitled to give special treatment to members
of one’s race or one’s sex. So, if this is true, then what
ethical egoism allows—indeed, requires—each person to
do (i.e., give absolute priority to oneself) is just flat-out
immoral.
- EMP, chapter 7: “The Utilitarian Approach”
- section 7.1: “The Revolution in Ethics”
- the principle of utilitarianism
- According to utilitarianism, in any given
circumstance, the right act is the one that produces the most
welfare, for all creatures capable of faring well or badly, from now
onwards into the indefinite future.
- This leaves unanswered the question of exactly what creatures have
welfare and which ones don’t. Different utilitarians answer this
question differently, leading to subtly different versions of
utilitarianism.
- rival approaches to ethics that utilitarians reject
- divine commands
- non-divine general rules (such as the ten
commandments would be if they were not said to be God’s
commands)
- natural law
- utilitarianism’s five components
- consequentialism: The rightness and wrongness
of acts depends entirely on the goodness and badness of their consequences.
- welfarism: What makes consequences good is
that people and other animals capable of faring well or badly
are faring well: their welfare is raised or enhanced. Welfare is
perhaps most often understood in terms of happiness, but there
are other
interpretations as well.
- universalism: The welfare of everyone capable
of having welfare (capable of faring well or badly) counts, and
counts equally.
- aggregation: The way in which everyone’s
welfare counts is by being lumped together into one total.
- maximization: Once it’s (conceptually)
lumped together into one total, welfare is to be made as large
as possible: this is what right acts do.
- consequentialism and welfarism
- These are the two most important of the five components of
utilitarianism. They must be understood and kept distinct.
- Consequentialism is a thesis about what makes acts right and
wrong. It says that the rightness and wrongness of acts depends
entirely on the goodness and badness of their consequences. But
consequentialism does not say what makes consequences good and bad.
- Welfarism is a thesis about what makes consequences good and bad.
Unlike consequentialism, it does not say what makes acts right and
wrong. It only talks about what makes consequences good and bad.
- So one can be a consequentialist but not a welfarist by
maintaining that the rightness and wrongness of acts depends on the
goodness and badness of their consequences (that’s what it takes to be
a consequentialist) and then denying welfarism. To deny welfarism, one
might claim that what makes consequences good and bad doesn’t have
anything to do with welfare; rather, it has to do with following God’s
commands (no matter how they affect welfare), or with acquiring
knowledge (even if it doesn’t contribute to your welfare), or with
living a long time (even if you’re living a life of no welfare, or
negative welfare).
- Similarly, one can be a welfarist but not a consequentialist by
maintaining that the goodness and badness of consequences depends on
welfare (that’s what it takes to be a welfarist) and then denying
consequentialism. To deny consequentialism, one might claim that the
rightness and wrongness of acts doesn’t have anything to do with
consequences; rather, it is entirely a matter of keeping promises
(regardless of the consequences of keeping promises), or entirely a
matter of acting according to the golden rule, or entirely a
matter of something else not reducible to consequences. One could deny
the moral significance of consequences even while saying to the
welfarist, “Yes, if you’re interested in what makes consequences good
and bad, welfare is the answer.”
- a method for applying the principle of utilitarianism to a
particular situation
- List the possible acts.
- For each act, see how much net good it would do.
- Identify the act that does the most net good.
- That’s the right thing to do.
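- The utilitarian procedure has the same shape as the egoist sketch above, except that the second step adds up the effects on everyone affected, not just the agent. A minimal sketch, again with hypothetical acts and invented welfare numbers:

```python
# Illustrative sketch only: each act maps to hypothetical welfare changes
# for each affected party; the names and numbers are made up.
acts = {
    "keep the money I found": {"me": 10, "owner": -12},
    "return the money": {"me": 2, "owner": 12},
    "donate the money": {"me": -3, "owner": -12, "charity recipients": 20},
}

def overall_welfare(effects):
    # Aggregation: everyone's welfare is lumped together into one total.
    return sum(effects.values())

# Maximization: the right act is the one whose total is largest.
right_act = max(acts, key=lambda act: overall_welfare(acts[act]))
print(right_act)  # -> return the money (total welfare 2 + 12 = 14)
```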
- section 7.2: “First Example: Euthanasia”
- a traditional religious view of euthanasia
- It is part of the Christian tradition, and widely
believed, that intentionally killing an innocent person is
unequivocally wrong.
- This applies even when the person wants to be killed, as in cases
of euthanasia.
- the utilitarian approach
- The utilitarian approach involves
weighing the costs and benefits of the specific act of euthanasia
being contemplated and then doing it if, and only if, doing it has a
greater balance of benefits versus costs than not doing it.
- In a typical case, the benefits of euthanasia
involve the end of suffering for the person being euthanized.
- The costs include the time the person loses by not remaining
alive (if being alive is any benefit to them at all), the loss of time
for others to enjoy or benefit from that person’s being alive, and
a possible lessening of the respect for life that we have in
society.
- So utilitarians will approve of some acts of
euthanasia and disapprove of others, depending on the intensity
of the person’s suffering, how badly others want them to
remain alive, the effects on other people’s respect for life,
and any other consequences having to do with sentient
creatures’ welfare.
- Would a benevolent God allow euthanasia?
- It is an interesting puzzle for those who believe that God both
(1) is benevolent and (2) disapproves of euthanasia to explain what
sort of good God wishes for us if it includes suffering that could be
relieved by euthanasia.
- The resulting explanation might offer an interesting challenge to,
or non-standard interpretation of, utilitarianism's “welfarism”
component.
- utilitarianism and liberal social policies
- On social issues generally, utilitarians tend to be fairly
liberal.
- This is because they tend to think that if some act (such as sex,
or watching pornography, or doing drugs) doesn’t substantially affect
other people besides the ones directly involved, then the way to
maximize overall welfare (conceived of as overall happiness) is to let
people do what they want, since people tend to want to do those things
that make them happy.
- section 7.3: “Second Example: Nonhuman Animals”
- a traditional religious view of nonhuman animals
- It is part of the Christian tradition, and widely believed, that
animals have no moral standing: that the only moral reasons for not
harming animals stem from the moral standing of humans.
- They are believed not to have moral standing because they are not
rational (in the way that humans are), because they cannot speak (in
the way that humans can), and/or simply because they are not human.
- the utilitarian approach
- Utilitarians, however, deny that these factors are
relevant to moral standing. On the utilitarian view, something gets
moral standing just by being able to suffer (or, put not so
drearily, by being able to be happy).
- This does not mean, of course, that nonhuman
animals are to be treated just like humans. Since the differences
between humans and nonhuman animals result in differences in
capacities or susceptibilities to experience enjoyment or suffering,
and since enjoyment or suffering is ultimately what matters
(according to utilitarians), many differences in treatment are
acceptable. For example, it is not the case that we ought to provide
education to cats, as we ought to provide it to humans, because (as
far as we know) cats cannot benefit from education in the way that
humans can. And we need not provide food and shelter to bears,
because they (unlike humans) seem to do all right in the wild.
- Utilitarianism does imply, however, that there are
some radical changes that must be made in contemporary western
practices. For example, we must reduce the amount of suffering
involved in animal experimentation and (on a much larger scale)
factory farming.
- further cases for considering the implications of utilitarianism
- the transplant case as an embarrassment to utilitarianism
- the trolley cases as a vindication of utilitarianism
- EMP, chapter 8: “The Debate over Utilitarianism”
- section 8.1: “The Classical Version of the Theory”
- the three components objectors normally focus on
- welfarism (understood hedonistically, or in terms of happiness)
- consequentialism
- universalism (also known as impartiality)
- one examined in each of next three sections
- section 8.2: “Is Happiness the Only Thing That
Matters?”
- welfarism understood hedonistically, or in terms of happiness
- Most utilitarians believe that a person’s
welfare is determined by how happy he or she is, over the course of
his or her whole life. This view is known as hedonism. Hedonism,
then, is a specific version or interpretation of welfarism.
- Note
that, as understood here, hedonism is not the view that “base”
pleasure is all that matters, or that only short-term consequences
matter. Hedonism as understood here encompasses all sorts of things
that make people feel good, and takes the long term as well as the
short term into account. Rachels presents two cases that
challenge the plausibility of hedonism.
- objections to welfarism understood hedonistically (i.e., in terms
of happiness)
- One objection to hedonism is conveyed in the
example of the injured piano player who, because of her injury,
becomes unhappy. According to hedonism, what’s bad about this
situation is that the woman is unhappy. If she were not made unhappy
by her injury, then the situation would not be bad; the injury
itself, until it raises or lowers someone’s welfare (= happiness),
is morally neutral. One might challenge this view by saying that the
injury itself is bad, and that the woman’s reaction of unhappiness
is simply an appropriate response to this already-bad situation.
- A similar objection to hedonism is conveyed in the
example of the person whose “friend” constantly ridicules him
behind his back. If the person never finds out about this, and has
his happiness impaired by his “friend’s” betrayal in no other
way, then hedonism implies that the person is just as well off—is
faring just as well—as if the other person were loyal and
respectful. One might challenge this view by maintaining that one is
harmed by being ridiculed to others, even if it never comes back and
affects one’s own state of mind.
- The idea underlying these objections to hedonism can be put like
this: Is how a person feels—the “felt quality” of his or her
experience, or how his or her life feels “from the inside”—all that
ultimately matters, as far as his or her well-being is concerned? Or,
on the contrary, can a person’s well-being be affected by things that
never have any upshot for the felt quality of his or her experience?
Robert Nozick’s idea of the experience machine can be used to sharpen
our intuitions about this.
- how utilitarians respond
- In response to these objections to hedonism, some utilitarians
stand by it, as the best version or interpretation of welfarism.
- Others, finding the challenges to hedonism too serious to
tolerate, adopt different accounts of welfare (such as objective-list
accounts and preference-satisfaction accounts). We will not worry
about these other views; to keep things simple, we’ll continue to
assume that what utilitarianism has in mind, when it comes to welfare,
is happiness.
- section 8.3: “Are Consequences All that Matter?”
- transition from previous section
- The previous section dealt with challenges to
utilitarianism’s welfarist component—or, to be more
precise, with challenges to most utilitarians’ hedonistic
interpretation of welfarism.
- This section deals with challenges to
utilitarianism’s consequentialist component. Note that
these challenges are logically independent of the previous ones: as
we saw in connection with chapter 7, a
person could accept welfarism and even hedonism while rejecting
consequentialism, or accept consequentialism while rejecting
welfarism or hedonism (or both).
- objections to consequentialism
- One has to do with justice. The example
of the person who must bear false witness in order to bring
about the best possible results shows that a consequentialist
theory may, on occasion, require a person to behave in a way
that will bring about serious injustice.
- A second objection has to do with rights.
Consequentialist theories permit people to violate others’
rights, as long as their doing so has benefits that outweigh the
costs to the person whose rights are violated (and the costs to
anyone else harmed by the rights violation, of course, if anyone
else is harmed).
- A third objection has to do with what are called backward-looking
reasons. If you promise something to someone and then would
rather not keep your promise, then consequentialism says you
don’t have to keep it, as long as the inconvenience to you of
keeping it is greater than the inconvenience the other person
will suffer if you don’t keep it. Of course, in estimating the
inconvenience to the other person, you have to take into account
the fact that if you don’t keep your promise, then they may
well be very disappointed, because you promised,
but once you’ve taken the promise into account in terms of
benefits and harms to the persons involved, the promise has no
additional moral weight (according to consequentialism). The fact
that you promised is a backward-looking reason, and
consequentialist theories—being concerned, of course, with
consequences—are exclusively forward-looking.
- a case to consider: the deathbed promise—$2 million to the
Yankees, as promised, or to famine relief?
- section 8.4: “Should We Be Equally Concerned for Everyone?”
- transition from previous section
- Whereas sections 8.2 and 8.3 dealt with welfarism (understood in
terms of happiness) and consequentialism, respectively, this section
deals with utilitarianism’s universalism, or impartiality.
- This is the idea
that everyone counts equally, from the moral point of view.
- objections to universalism, or impartiality
- One objection is that utilitarianism's commitment to
impartiality makes it too
demanding. By requiring us always to maximize overall welfare,
it implies that we are acting wrongly whenever we spend time or
money on ourselves or other people close to us instead of on the
most needy people that could benefit from our time and money
(since they presumably stand to gain more from our time and
money than other, better-off, people).
- A second objection to utilitarianism is that
its commitment to impartiality forbids us to have and to act on strong personal
relationships such as the ones that most people think are
extremely important. It implies that one ought not to benefit one’s
spouse or child if one can, with the same cost, benefit others
more—even if the other people are strangers, and the benefit to
them is only slightly greater than the benefit to one’s spouse or
child.
- section 8.5: “The Defense of Utilitarianism”
- the attempt to accommodate the intuitions of common-sense morality
- One reply to the foregoing objections is to claim
that the cases in which utilitarianism would require injustice,
rights violations, or promise breaking are exceedingly rare,
precisely because justice, respecting rights, and keeping promises
are so important to human well-being.
- This reply claims, in effect,
that although it is possible, in principle, for cases such as those
described to arise, things are different in practice: in practice
injustice, rights violations, and promise breaking tend to lower
overall well-being. Therefore, utilitarianism tends to forbid such
behavior and ultimately supports
these common-sense moral commitments rather than undermining them.
- This reply, however, is not very successful, since there can still
arise at least some cases in which utilitarianism conflicts
with our common-sense moral intuitions; and no excuse for
utilitarianism’s giving the “wrong answer” in these cases has
been provided.
- the retreat to rule utilitarianism
- Another reply to the foregoing objections is to
propose a modified form of utilitarianism known as rule
utilitarianism. Whereas standard utilitarianism (also
known as act utilitarianism) says that the right thing to do is
whatever has the best consequences, in terms of overall welfare, rule
utilitarianism says that the right thing to do is whatever would be
required by the rules whose general acceptance would have the best
consequences, in terms of overall welfare.
- To see how a rule utilitarian would respond to one of the
objections above—the one having to do with promises, for example—we
have to see whether a rule requiring the keeping of promises would, if
generally accepted, do more good than harm. If it would (as seems
likely), then promise-keeping is always required, even in those cases
in which better consequences would result from the breaking of a
promise.
- The upside of this reply, from the point of view of
utilitarianism, is that it answers the foregoing objections pretty
well; the downside is that it is really a different theory, with a
different content from that of standard utilitarianism.
- the (partial) rejection of common-sense morality
- A third reply is to stand by act utilitarianism (in contrast to
the second reply’s retreat to rule utilitarianism) and, instead of
trying to downplay the conflict between what act utilitarianism
requires and what we think, intuitively, morality requires (as the
first reply does), argue that the conflicts between the implications
of act utilitarianism and “common-sense morality” show not (1) that
something is wrong with act utilitarianism, but (2) that something is
wrong with “common-sense morality.”
- On this view, our intuitive understanding of what morality
requires may be in need of substantial revision, and we can see just
what kind of revision it may be in need of by checking it against act
utilitarianism.
- This reply, then, rejects the method by which normative-ethical
theories are usually judged or evaluated, since that method appeals to
our common-sense intuitions about morality as beliefs that any good
normative-ethical theory ought to agree with.
- worksheet: “Utilitarianism: Its Implications”
- “lessons” of utilitarianism (general implications of utilitarianism)
- You ought to put resources where they will do the most good.
- Putting resources where they will do the most good can seem, in some
cases, unfair.
- Rights must be respected when and only when violating them would
cause more unhappiness than happiness.
- Something that is wrong in one situation can be made right if
additional people become involved and they derive happiness from the action
in question.
- The results of things you could have done in the past, but didn’t,
are irrelevant.
- Once you’ve taken into account disappointed expectations, loss of
trust, the developing of bad habits, and other things that will have an
impact on the future, then the fact that an act is the breaking of a
promise has no further moral weight.
- EMP, chapter 9: “Are There Absolute Moral
Rules?”
- section 9.1: “Harry Truman and Elizabeth Anscombe”
- One approach to morality is to regard it as consisting of absolute
rules—rules that must not be broken, no matter what.
- Another is to say that even if there are valid moral rules
(whether absolute or not), morality ultimately depends on the consequences
of various courses of action.
- section 9.2: “The Categorical Imperative”
- categorical imperatives vs. hypothetical ones
- Kant observed, more explicitly than most
theorists, that morality consists of imperatives, or commands, of a
certain kind. This kind of imperative can be introduced by way of a
distinction between hypothetical and categorical imperatives.
- Hypothetical imperatives are imperatives such
as “If you want to be healthy then you should eat more
vegetables” and “If you want to get to the movies on time
you ought to leave now” and “If you want your headache to get
better you should stop watching American Idol.” Imperatives such as these do not bind
absolutely; that is, it’s not the case that eating more
vegetables is something you ought to do, period—it’s
something you ought to do only if that would further some
objective of yours, such as getting healthier. As a result, these
imperatives are considered hypothetical imperatives: they are
binding on you only on the hypothesis that you aim at the
objective they serve.
- A hypothetical imperative is a command that is meant to
apply only on the assumption that you aim to achieve a certain
purpose.
- Categorical imperatives, in contrast, bind
absolutely. If I tell you that you ought not to kill innocent
people without their consent, then I mean it categorically: that
is, without exception, or regardless of whether or not your
objectives will be advanced by your compliance. Note, then, that
whether an imperative is hypothetical or categorical is, in a
sense, a function of how it’s meant. If someone tells you to
do something, but then backs off upon finding out that you
don’t have objectives that would be advanced by doing that
thing, then the imperative is hypothetical. If the person tells
you to do something, and tells you you have to do it even if it
doesn’t serve any of your objectives, then it’s categorical.
- A categorical imperative is a command that is meant to
apply regardless of the purposes you aim to achieve.
- the significance of this distinction: All moral judgments are
categorical imperatives.
- The point of distinguishing hypothetical and
categorical imperatives is to set up the observation that morality
consists of categorical, not hypothetical, imperatives. That is, you
can’t escape your duty to be moral, your duty to comply with the
commands of morality (whatever those commands of morality turn out
to be), by pointing out that you don’t have objectives that would
be served by your compliance. Rather, you have to comply,
regardless of your objectives. Now this claim—that morality
consists of categorical, not hypothetical, imperatives—is not a
distinctively Kantian claim; on the contrary, Kant is just
articulating an intuition that just about everyone has about the
inescapability of moral commands (again, whatever
those commands turn out to be).
- The fact that morality consists of categorical
imperatives poses a bit of a puzzle regarding how any moral command
can be justified. For it’s fairly clear how hypothetical
imperatives can be justified: their justification derives from the
fact that the person being commanded aims at some objective; and if
he doesn’t like the command, he is free to escape it by disavowing
the objective. So the justification of hypothetical imperatives
doesn’t seem all that problematic. But how can categorical
imperatives be justified? How can someone legitimately tell you that
you have to do something, regardless of your objectives? So,
since moral commands are categorical imperatives, how can moral
commands be justified?
- the categorical imperative
- Kant, then, saw moral philosophy as essentially
the search for some valid categorical imperative(s): that is,
one or more commands that you would (rationally) have to comply
with, regardless of your particular objectives. Of course, others
before him had made their own proposals (though they didn’t
necessarily conceive of them as categorical imperatives);
utilitarians, for example, had said that one must always do whatever
will have the best consequences, in terms of overall well-being. But
Kant rejected this and other moral principles that had been
proposed, and he proposed the following: “Act only according to that
maxim by which you can at the same time will that it should become a
universal law.” Because Kant was so attentive to the categorical
character of moral imperatives, this statement is known as “the
categorical imperative” (even though other moral theorists propose
their imperatives as equally categorical).
- The categorical imperative as stated above is a
translation from Kant’s German expression. As a result, and
possibly as a result of Kant’s own writing style in German, the
grammar of the English statement is oddly awkward. Here’s (in my
view) a more ordinary-sounding (if slightly more wordy) rendering:
“In any given situation, act only according to a maxim that you
could, while acting on it, consistently also will to be a maxim that
everyone would feel free to act on.”
- The meaning of the categorical imperative can be
brought out by considering the steps that one would take to apply it
in a particular situation:
- Figure out the maxim on which you would be
acting. That is, figure out the rule or principle that you would
be acting on. Some examples of maxims (for various
circumstances) are “Eat when you’re hungry” or “make a
false promise when it will help you get out of a jam” or
“Hire the most qualified person for the job.” Your maxim
could even be the utilitarian rule of “Do what will maximize
overall welfare.”
- Imagine that everyone felt free to act on the
maxim on which you’re thinking about acting.
- If the imagined outcome (when everyone feels
free to act on this maxim) is acceptable to you—something you
can will—then your act is o.k., morally. If not, then it’s
immoral.
- Notice, then, the following structural difference
between utilitarianism and Kantianism. To act in accordance with
utilitarian morality, you look at all your options, and then choose
the best one (where ‘best’ is understood in a certain way). To
act in accordance with Kantian morality, you don’t have to survey
all your options and then choose one that is somehow defined as the
best; you just look at your options one by one—and you can go in
any order you want—and all you have to do is find one that
complies with the categorical imperative: that is, find one that
passes the multi-step test described above.
- examples
- making a false promise
- mobile infrared transmitters
- the main idea: It is immoral to make an exception of yourself
in taking advantage of a system that needs widespread cooperation in
order to function.
- section 9.3: “Absolute Rules and the Duty Not to
Lie”
- Kant’s argument that the categorical imperative prohibits lying in
all circumstances
- Kant interpreted the categorical imperative to imply that there
were certain more specific absolute rules, such as a rule against
lying.
- To see how he might have thought this, consider the maxim “Lie
whenever it suits your purposes.” Now imagine what it would be like if
everyone felt free to act on this maxim. In such a world people would
not trust other people, which of course would be a bad thing; and to
compound the problem, the lack of trust among people would mean that
the maxim itself had become useless: for the only way for a lie to
succeed is for people to trust the person telling it. As a result, one
cannot regard, as acceptable, everyone’s feeling free to act on this
maxim, and so it is immoral for anyone to act on it.
- reasons for thinking the categorical imperative does not prohibit
lying in all circumstances
- Whether the categorical imperative really does
imply blanket prohibitions on things like lying is a disputed
question. (The fact that Kant said it did does not settle the
question; the fact that he made up the categorical imperative does
not mean that he gets to decide what it implies and what it does
not. It’s up to each person, reasoning logically, to draw their
own inferences—as sincerely as they can, of course—from the
categorical imperative.)
- To see why, consider a maxim not as
permissive as “Lie whenever it suits your purposes,” but a
stricter one, such as “Lie whenever doing so would save someone’s life and have no effect on
others, except possibly for inconveniencing a would-be murderer.”
(One might think to formulate such a maxim in the special
circumstances of the inquiring-murderer case that Rachels
describes.) Would the world be such a bad place if everyone felt
free to act on this maxim? It licenses lying in such rare
circumstances that its universal adoption would not seem to result
in the same problems that would result from the universal adoption
of the more-permissive maxim considered earlier. So there is reason
to doubt that the categorical imperative has the sweeping
implications that Kant thought it did.
- some problems with Kant’s moral theory
- The
categorical imperative seems to give conflicting verdicts in
regard to the same act, depending on how it’s described (i.e.,
what someone says is its maxim). (This is probably the biggest
problem with the categorical imperative.)
- Kant
seems to have been mistaken in claiming that the categorical
imperative prohibits lying in all circumstances, or that it
implies any such general rules.
- If
Kant were right in claiming that the categorical imperative prohibits
lying in all circumstances, then the categorical imperative
would seem, to many people, to be overly strict.
- The
way in which the categorical imperative is used to evaluate each
of an agent’s options one by one (instead of choosing the
“best” of the options, as utilitarianism and egoism do)
raises the possibility of each of an agent’s options failing
the test of the categorical imperative—which would seem to be
a defect in the categorical imperative. (For more on this, see
the notes that go with section 9.4, below.)
- problematic implications of the categorical imperative (or,
problematic implications of a ban on making an exception of yourself)
- examples of not-immoral cases of making an exception of yourself
- paying off no-annual-fee credit cards in full (and, so, paying no
interest)
- buying misspelled items on eBay and reselling them at higher
prices
- a further problem with the categorical imperative
- hard to figure out its implications for certain kinds of cases,
because of the difficulty of formulating the maxim
- two ways of formulating a maxim in the case of thwarting an
inquiring murderer
- It’s o.k. to lie whenever you want.
- It’s o.k. to lie to save the life of an innocent person if you
harm no one except a would-be murderer.
- Kant’s other remarks on lying
- Kant himself considered the problem of the inquiring-murderer
case, and he tried to justify his absolute prohibition on lying with a
second argument, having to do with the bad consequences that might
result from lying.
- Although Rachels spends some time on this argument, it is such a
peripheral (and, as Rachels says, weak) part of Kant’s approach to
morality that we do not need to worry about it.
- section 9.4: “Conflicts between Rules”
- We saw above that the inquiring-murderer case
shows that a moral theory including absolute rules (such as an
absolute rule against lying) has, for that reason, some
counter-intuitive implications. Another reason for doubting the
plausibility of a moral theory including absolute rules arises from
the possibility of conflicts between rules.
- To see what problem might arise from the
possibility of conflicts between rules, note that if a moral theory
prescribes more than one absolute rule, then it will be possible, at
least in principle, for a case to arise in which the agent cannot
avoid violating at least one of the rules. (Since the rules are absolute,
neither will yield to, or be overruled by, the other.) Such a case
would be one in which, regardless of what the agent chooses to do, the
moral theory in question would imply that the agent is acting
immorally. And this seems implausible: it seems that a moral theory
must say, in any given case, that there is something that the agent
could do that would be a (or the) moral thing to do.
- To better understand this phenomenon of conflicts
between rules, note that it arises from the “structural”
difference between utilitarianism and Kantianism noted earlier. The
former says that the right act, in any given circumstance, is the
one that has the best consequences (in terms of overall welfare). So
no case can arise in which it doesn’t select some act (maybe more
than one, if there’s a tie) as right. But Kantianism doesn’t
select an act as right by surveying all the options and selecting
the “best” one as the right one; instead, it looks at them one
by one, and judges each one independently of the others. As a
result, it’s possible for every one of them to fail the Kantian
test, and thus for an agent to have no morally permissible
thing to do in a particular circumstance.
- section 9.5: “Another Look at Kant’s Basic Idea”
- A basic idea at the core of Kant’s theory is
that of impartiality: the idea that one must treat everyone equally,
and not give oneself special permissions, when it comes to how one
acts (that is, permissions that one would not be willing to extend to
others). This is implicit in the thought that one should act only on
those maxims that one would be willing for everyone to act on.
- As we saw when we considered the idea of
impartiality in chapter 1, an idea at the core of the idea of
impartiality is that of acting for reasons: if my hunger is a good
reason for me to steal your lunch, then your hunger is an equally
good reason for you to steal my lunch. In this way, impartiality
can, in effect, be derived from the very idea of acting for good
reasons. One way of regarding Kant’s theory, then, is as an
impressive, but ultimately problematic (largely because of Kant’s
insistence on deriving absolute rules from the categorical
imperative), attempt to derive an entire moral theory from the
simple idea of acting for reasons.
- preview of chapter 10, “Kant and Respect for Persons”
- the second “formulation” of the categorical imperative
- what this means: different words, same implications
- Do the formulations have the same implications?
- It’s not up to Kant to just decide that they do or do not.
- Rather, it’s a matter of logic whether they do or do not.
- intrinsic value vs. instrumental value
- Something has intrinsic value if it is valuable in and of itself.
- Something has instrumental value if it is valuable only as a means
(an “instrument”) for obtaining something intrinsically valuable.
- review of chapter 10
- section 10.1: “The Idea of Human Dignity”
- the second formulation of the categorical imperative
- The categorical imperative presented above—the statement about
acting only on maxims that it would be o.k. for everyone to act on—is
the fundamental principle of Kant’s moral theory.
- But Kant thought that the very same principle could be expressed
in an entirely different form of words, as follows: “Act so that you
treat humanity, whether in your own person or in that of another,
always as an end and never as a means only.” This is, according to
Kant, another formulation of the categorical imperative.
- what it means to call this another “formulation” of the
categorical imperative
- It is important to understand what is meant by
saying that this is another formulation of the categorical
imperative.
- What is meant is that the two principles (the two
formulations) have exactly the same meaning: that is, if one
person accepted the first formulation as the fundamental
principle of morality, and the second person accepted the second
formulation as the fundamental principle of morality, then
(assuming each understood his or her principle correctly) each
would have the very same conception of morality: there would be
no disagreement between the two persons in regard to what acts
are morally permitted, what acts are immoral, and so on.
- why it is not up to Kant to say the two formulations are
equivalent
- We saw earlier that it is not up to Kant to
say what the specific implications of the categorical imperative
(in its first formulation) are in regard to questions such as
lying: rather, it’s a matter of reasoning, and logic, to
ascertain what the implications of that principle are.
- Similarly, here, it’s not the case that the two formulations
of the categorical imperative are equivalent just because
Kant said they are; rather, it’s a matter of reasoning and
logic (and analyzing the meaning of the two formulations) to
ascertain whether they’re equivalent. Kant might have been
wrong in saying that they’re equivalent (the jury is still
out, I think, among Kant scholars), just as he might have been
wrong (indeed, probably was wrong) in saying that the first
formulation prohibits lying in all possible circumstances.
- the meaning of the second formulation of the categorical
imperative
- So what is the meaning of this second formulation
of the categorical imperative—the one having to do with treating
humanity always as an end and never as a means—and why might it be
thought true? That is, if we put aside its supposed equivalence to
the first formulation, and try to apply it to moral issues directly,
what are its implications, and why should we regard it as (morally)
binding?
- Basically, the idea is that we must always
treat humans (others obviously, but also ourselves) with
respect, and as beings whose objectives are worth pursuing just
because they are humans’ objectives: that is, what humans
aim at, and their interests and welfare generally, are worth
promoting, and nothing else is. So, you ought not to injure
people, or lie to them, or manipulate them, because then you
would be interfering with the achievement of their objectives,
or with their well-being; indeed you are very likely treating
them as an instrument, or tool, for your own
objectives.
- Why did Kant think that treating humans as
ends was so important? Because humans, he thought, are the only
beings that have rationality: the ability to value things and
make decisions accordingly. Inanimate objects, of course, cannot
do either of these things, and lesser animals may engage in
apparently purposeful conduct (such as a dog digging for a bone it
buried); but only humans, Kant thought, exhibit genuine
rationality. As a result, they (and only they) must always be
treated as ends, and never as means only.
- Understanding the reasoning behind this
formulation of the categorical imperative helps to clarify its
meaning. Because of its basis in Kant’s treasuring of
humans’ rationality, it may be understood as saying that you
must always respect, and never interfere with or subvert,
others’ rationality. So, to repeat an example that
Rachels gives: you must not lie to other people in order to get
some money from them, because then you are subverting their
rationality: you are preventing them from reasoning well about
the situation (your need for money) by giving them false
information. But if you tell them your reason for asking, then
you are respecting their rationality, because you are giving
them the information they need in order to make a (rational)
decision on their own. (This also shows, by the way, that
treating others as ends and never as means only does not rule
out asking others for help; it just rules out manipulating or
coercing others into helping you.)
- second formulation of the categorical imperative: Act so that you
treat humanity, whether in your own person or in that of another, always
as an end and never as a means only.
- To treat something as a means is to manipulate it.
- gist of the second formulation: Don’t manipulate people. (Never
subvert their rationality.)
- a method for applying this to a particular situation
- Focus on a particular act that you are thinking about doing.
- See whether it would involve treating anyone (yourself or
someone else) purely as a means.
- If it would, then it is wrong. Otherwise, it’s allowed.
- one implication: Never lie, not even to the inquiring murderer.
- section 10.2: “Retributivism and Utility in the
Theory of Punishment”
- retributivism
- The emphasis that Kant put on the rationality and
“dignity” of humans, and what it means to treat someone as an
end, can be seen in Kant’s attitude towards punishment. But before
getting into this topic (which is the topic for the rest of the
chapter), it is important to note that Kant’s moral theory is not
especially preoccupied with punishment, and should not be thought of
as primarily a theory of punishment. The reason for considering
punishment at such length here is that in doing so, we can come to a
better understanding of what Kant thought, and how he dissented from
the utilitarian view, on the subjects of
- the moral importance of individuals’
well-being
- what it means to treat someone with respect
- Kant endorsed retributivism: the idea that those
who have engaged in wrongdoing deserve to be treated badly in
return.
- the utilitarian view
- The retributivist approach to punishment can be understood by
considering a rival view, that of the utilitarians. Here are the
essential ideas of utilitarians’ attitude towards punishment.
- All harm to persons, including punishment, is
inherently evil.
- Although this does not mean that punishment is
always immoral, it does mean that punishment is permissible only
when its benefits outweigh its costs. While its costs include
the pain to whomever it is inflicted on (as well as the costs of
administering the penal system), its benefits may include the
following:
- making innocent people safer by getting
dangerous people off the street
- deterring would-be wrongdoers
- rehabilitating wrongdoers
- If it were possible to “punish” and
rehabilitate criminals nearly painlessly (such as with a very
brief and pleasant, but nonetheless very effective
rehabilitation program) so that they could be quickly caught and
released, and would be law-abiding thereafter, then that would
be great: the displeasure and inconvenience that people
experience when we punish them are inherently bad, and must be
avoided unless they are necessary to secure even greater
benefits.
- section 10.3: “Kant’s Retributivism”
- Kant’s objections to the utilitarian approach to punishment
- Rehabilitation typically involves treating
people as means, since it involves molding them into the kind of
people that we would like for them to be, instead of helping
them to become the people they might autonomously decide that
they would like to be. There may be the occasional
rehabilitative program that adequately respects individuals’
rationality, by helping them choose their objectives freely; but
in general, rehabilitation is manipulation.
- Punishment should be inflicted simply because
someone has done something wrong, and not for any further
reason, or for anyone’s good. And the degree or intensity of the punishment
should be proportional to the seriousness of the offense, not
determined by the question of how much punishment will do the
most good, in terms of benefits versus costs.
- a notable implication of retributivism
- An interesting consequence of this retributivist
view is that a wrongdoer ought to be punished even if he will never
be able to harm anyone again.
- Recall the example of the murderer being left on the island.
- why Kant thought that only the guilty may be punished
- According to Kant, not punishing a wrongdoer
would involve refusing to recognize that person as a rational
agent who freely chose to do what he did. It would be, instead,
to regard the person as having been incapable of choosing to do
wrong or not. And it is disrespectful to treat someone as if it
were not up to him to act in that way or not. In contrast,
holding someone accountable is a way of showing respect.
- Kant also thought that punishing wrongdoers
was a way of showing respect for them by treating the maxim that
seemed to underlie their wrongdoing as a rule that was
appropriate not only for them to act according to, but also for
us to use in deciding how to treat them. It’s almost as if
we’re saying to them, “O.k., you think that harming someone
just for your own convenience is a good way to behave? Then
we’ll do that to you.”
- purpose of digression into retributivism
- Recall the purpose of this long digression into
Kant’s retributivism.
- The purpose is to better understand what Kant thought, and how
he disagreed with utilitarians, on the subjects of
- the moral importance of individuals’ well-being
- what it means to treat someone as an end, or with respect
- preview of chapter 11, “The Idea of a Social Contract”
- In a prisoner’s dilemma, each person has two available actions, or
“moves” in the game:
- One is typically called one of the following: cooperate, collude, behave altruistically.
- The other is typically called one of the following: not cooperate,
defect, cheat, behave selfishly, behave self-interestedly.
- In a prisoner’s dilemma
- Each person’s most advantageous move is to defect, regardless of
others’ moves.
- Each person’s outcome is more advantageous when all cooperate than
when all defect.
- review of chapter 11
- table
- you on left-hand side, other(s) along top
- two moves for each: cooperate and defect
- upper-left entry: pretty good for both/all
- upper-right entry: terrible for you, great for other(s)
- lower-left entry: great for you, terrible for other(s)
- lower-right entry: pretty bad for both/all
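- The structure just described in the table can be checked against concrete payoff numbers. Below is a minimal two-player sketch with made-up payoffs; it only illustrates the two defining features of a prisoner’s dilemma, and the numbers are not taken from Rachels:

```python
# Hypothetical two-player prisoner's dilemma payoffs.
# payoffs[(my_move, other_move)] = (my payoff, other's payoff)
payoffs = {
    ("cooperate", "cooperate"): (3, 3),   # pretty good for both
    ("cooperate", "defect"):    (0, 5),   # terrible for you, great for other
    ("defect",    "cooperate"): (5, 0),   # great for you, terrible for other
    ("defect",    "defect"):    (1, 1),   # pretty bad for both
}

# Feature 1: whatever the other player does, defecting pays you more.
for other_move in ("cooperate", "defect"):
    assert payoffs[("defect", other_move)][0] > payoffs[("cooperate", other_move)][0]

# Feature 2: each does better when both cooperate than when both defect.
assert payoffs[("cooperate", "cooperate")][0] > payoffs[("defect", "defect")][0]
print("Both defining features of the prisoner's dilemma hold.")
```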
- worksheet on prisoner’s dilemma
- the motivation for social-contract theory
- In many areas of human interaction, just letting everyone look out for
themselves leads to things’ working out o.k. For example, if you let people
pursue the kinds of educations they want (assuming they find willing
teachers), and take the kinds of jobs they want (assuming they find willing
employers), and marry whomever they want (assuming they find willing
spouses), and let people spend their money like they want, then in many
cases things turn out all right. Morality isn’t needed to make sure these
sorts of situations turn out o.k.
- But there are some areas of human interaction in which just letting
everyone look out for themselves leads to worse consequences, for everyone,
than if everyone were to make some sort of sacrifice for others. These are,
of course, prisoner’s dilemma situations. Social-contract theories regard
these situations as a problem, and see morality as the solution to this
problem. The way they formulate their solution to this problem is based on
the realization that if the people in such situations could agree on some
rules that everyone would follow in such situations, then they would all
agree that they should be bound to cooperate, instead of all being free to
defect.
- social-contract theory’s moral principle and the method for applying it
- social-contract theory’s moral principle: An act is right if it would
be allowed by rules that people would agree on for their mutual benefit.
- the method for applying it: For any action you are thinking about
doing (or trying to assess, morally), imagine whether it is the sort of thing
that people would agree should be allowed, if people came together and
agreed on rules of morality for their mutual benefit.
- some general implications of social-contract theory
- In any prisoner’s dilemma situation, everyone ought to cooperate, not
defect.
- You don’t have to cooperate when too many others are defecting.
- No one has any moral obligations to anyone with whom it is not
advantageous for him or her to cooperate.
- The point of morality is mutual advantage, not benefiting some at the
expense of others.
- implications for specific problems
- a newspaper reporter making up stories
- using drugs recreationally
- confidentiality of medical records
- relieving worldwide poverty
- section 11.1: “Hobbes’s Argument”
- Thomas Hobbes, the first major social-contract theorist, proposed to
understand morality by imagining what humans’ interactions would be like
in what is called the state of nature: that is, circumstances in which
there is no society, with its laws and morality, to constrain humans’
behavior; circumstances in which people interact simply with a view to
advancing their own interests, heedless of legal and moral constraints. To
understand what interactions in these circumstances would be like, we have
to consider several factors:
- equality of need: we all need essentially the same things, and need
them essentially in the same degree
- scarcity: the things we need aren’t very plentiful
- equality of power: we are all roughly equally powerful; none can
dominate his or her fellows
- limited altruism: people aren’t very concerned for others (beyond a
narrow circle of concern)
- Such circumstances, Hobbes claimed, would be a “constant state of war,
of one with all.” Human life would be “solitary, poor, nasty, brutish, and
short.” (Some people think, though, that Hobbes had an overly pessimistic
view of human psychology: that people in the state of nature wouldn’t
behave as badly as Hobbes claimed they would.)
- To escape these circumstances, Hobbes claimed, the only rational thing
for people to do is to form a social contract: a contract, to which each
person is a party, establishing a government and laws, all for their
common benefit.
- Hobbes had a moral theory that closely paralleled this political
theory. For Hobbes, not just government, but also morality, was to have
(1) its purpose explained, and (2) its content determined, by the idea of
a social contract. On this approach to morality, morality is not a matter
of doing God’s will or following abstract rules; rather, it has the
following very pragmatic content: it just consists of those rules that
rational people would accept, for their mutual benefit, on the condition
that others follow those rules as well.
- section 11.2: “The Prisoner’s Dilemma”
- Hobbes formulates morality in terms of a social contract by analogy
with the political necessity of a social contract. But there is another
(albeit related) motivation for a social-contractarian approach to
morality: a puzzle known as the prisoner’s dilemma.
- prisoner’s dilemma game
- The specific story that gives this concept its name is explained by
Rachels, and in many other books and articles. But the puzzle is not just
about criminals trying to minimize their jail time. It is characterized by
the following features, which occur quite commonly. (Note that these
features are not exactly the ones that Rachels states on p. 150. But these
are not in conflict with what Rachels says; they differ only in emphasis,
drawing attention to things that Rachels leaves implicit.)
- Many people face a certain decision, which each person has to make
independently, and each person has two options—a self-interested option
and an altruistic option.
- Regardless of what others do, each person is better off choosing the
self-interested option than choosing the altruistic option.
- Each person is better off if everyone chooses the altruistic option
(even including himself or herself) than if everyone chooses the
self-interested option. That is, everyone’s pursuit of his or her own
self-interest results in everyone’s interest being thwarted.
- Here are some real-life situations in which these features occur
(real-life “prisoner’s dilemmas,” some from Derek Parfit):
- commuting: Each gets to and from work faster by driving than by
taking the bus; but each would get to and from work faster if all (even
including himself) took the bus rather than driving.
- polluting: Each benefits more from not buying a pollution-control
device for her car than from the decrease in overall pollution that
would result; but each would benefit more if all (even including
herself) bought such a device than if all didn’t.
- fighting: Each soldier in a group will be safer if he turns and runs
instead of standing and shooting; but each would be safer if all (even
including himself) stand and fight than if all turn and run.
- fishing: Each will make more money if he catches as much as he can
instead of observing limits; but each would make more money if all (even
including himself) observe limits than if all catch as much as they can
(reducing the fish population below its minimum sustainable level).
- studying for a test that will be graded on a pre-set curve: Students
can study a lot or a little. If everyone studies a lot (trying to beat
his or her peers), then everyone’s extra efforts will cancel each other
out and everyone will get the same grades as if they had studied only a
little, and everyone will have done lots of extra work. It would be
better for all (same grades, no work) if no one studied at all. (This
example assumes, of course, that they’re studying solely for the sake of
the grade.)
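- a numerical sketch of the commuting example (an aside: the numbers and the
Python code are invented here for illustration; they are not from Rachels,
Parfit, or the lecture)

```python
# Hypothetical commute times (in minutes) that make the commuting example precise.
def commute_time(my_choice, other_drivers):
    """My commute, given my choice and how many of the other commuters drive."""
    if my_choice == "drive":
        return 20 + (other_drivers + 1)   # driving is faster, but every car adds congestion
    else:  # "bus"
        return 30 + other_drivers         # the bus is slower and is stuck in the same traffic

n = 100  # total commuters

# Feature 1: whatever the others do, I get to work faster by driving.
for other_drivers in range(n):
    assert commute_time("drive", other_drivers) < commute_time("bus", other_drivers)

# Feature 2: I am better off if all of us (me included) take the bus than if all drive.
assert commute_time("bus", 0) < commute_time("drive", n - 1)   # 30 min vs. 120 min
```

- The same structure can be given to the polluting, fighting, fishing, and
studying examples: the self-interested option is better for each individual no
matter what the others do, yet everyone ends up worse off when everyone
chooses it.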
- Here is how the prisoner’s dilemma leads to the social-contract
conception of morality. To some extent, we all get along just fine if
everyone pursues his or her own interests. But to a considerable
extent—especially in (real or, more often, metaphorical) prisoner’s
dilemma situations—the unfettered pursuit of self-interest is worse, in
terms of everyone’s self-interest, than the observance of certain limits.
The point of morality is to set these limits (so say contractarians);
thus, morality consists of those rules that rational individuals would
agree to abide by (assuming others do as well), for their mutual benefit.
In effect, the prisoner’s dilemma represents a problem for which
morality—as contractarians conceive of it—provides a solution.
- section 11.3: “Some Advantages of the Social-Contract Theory of
Morals”
- It provides a fairly straightforward and unmysterious account of what
the rules of morality are: the rules of morality are just those rules that
are necessary for peaceful and cooperative living. If a rule does not
contribute to this, it is not really part of morality.
- It provides a pretty clear account of why one ought to be moral: one
ought to be moral because the rules of morality are to everyone’s benefit,
which means they are to one’s own benefit as well as others’.
- It is not overly demanding: it implies that your obligation to follow
the rules ceases once others stop following them (and you’d be a sucker to
continue to follow them), or once following them becomes more costly to
you than being “outside” of morality altogether would be.
- It seems to propose a happy compromise between objectivist approaches
to morality (which leave one wondering where the objective content of
morality really comes from) and subjectivist approaches to morality (which
leave one wondering whether there isn’t something deeper in morality than
just each person’s feelings and preferences).
- section 11.4: “The Problem of Civil Disobedience”
- It is often thought that what justifies civil disobedience (when it is
justified) is that it is the last resort for achieving some beneficial
reform: although unlawful activity is itself bad, the ends justify the
means. This is essentially a utilitarian justification.
- Social-contract theory provides a different justification, which many
people find preferable. It says that civil disobedience is justified (when
it is justified) by the fact that those engaged in it are not bound by
society’s “contract” because they are denied their fair share of the
benefits that the “contract” and society are supposed to provide. Only
when they receive those benefits are they then also bound by the laws that
make them possible.
- section 11.5: “Difficulties for the Theory”
- The theory is often associated with the historical idea of a social
contract, which (even in the case of, say, the U.S., with its
constitutional convention) is largely or completely a fiction. But this
objection is not very serious, since the theory can be reformulated in
terms of the idea of an implicit contract—one to which people are bound
just by participating in society, regardless of whether they’ve ever
actually signed or otherwise explicitly consented to anything.
- The theory implies that we have no obligation towards beings with whom
we have no need to cooperate, or possibility of cooperating, such as
animals and mentally and physically disabled humans. This limitation on
who matters, morally, is viewed by many to be a counter-intuitive and
objectionable feature of the contractarian approach.