A Response to Sam Harris’ health-morality analogy

Sam Harris speaking in 2010


Sam Harris, one of the four “horsemen” of New Atheism, published a book delineating his position on moral realism (whether objective moral values exist and how we can know them). Its central claim is that being moral entails trying to maximize the aggregate “well-being” of sentient beings. Claims about the morality of actions thus reduce to statements about how those actions affect the mental states of creatures, and so can be verified scientifically.

His work has been reviewed and critiqued quite thoroughly by academics across fields, including the philosophers Russell Blackford, Massimo Pigliucci, and Thomas Nagel, and the physicist Sean Carroll. The general opinion seems to be that he unsuccessfully tries to derive an ought from an is, and defines “science” too broadly in order to justify an attractive subtitle for his book (“How Science Can Determine Human Values”). I have already written about naturalists’ attempts to ground morality, and thus will not attempt to point out all the flaws in Harris’ line of reasoning. Instead, I would like to focus on a novel analogy he draws between the science of medicine and an objective system of morality.

Harris was rightly criticized by several reviewers for basing his allegedly scientific system of morality on a premise (“we should value well-being of conscious creatures”) that isn’t scientifically justifiable. Even though a system of “prescriptive” morality can be formed with the help of science once we accept this premise, he seemed to provide no basis for justifying the premise itself apart from labeling those who don’t affirm it as absurd and irrational. He chose to respond to such criticism in the following manner:

It seems to me that there are three, distinct challenges put forward thus far:

1. There is no scientific basis to say that we should value well-being, our own or anyone else’s. (The Value Problem)

2. Hence, if someone does not care about well-being, or cares only about his own and not about the well-being of others, there is no way to argue that he is wrong from the point of view of science. (The Persuasion Problem)

3. Even if we did agree to grant “well-being” primacy in any discussion of morality, it is difficult or impossible to define it with rigor. It is, therefore, impossible to measure well-being scientifically. Thus, there can be no science of morality. (The Measurement Problem)

I believe all of these challenges are the product of philosophical confusion. The simplest way to see this is by analogy to medicine and the mysterious quantity we call “health.” Let’s swap “morality” for “medicine” and “well-being” for “health” and see how things look:

1. There is no scientific basis to say that we should value health, our own or anyone else’s. (The Value Problem)

2. Hence, if someone does not care about health, or cares only about his own and not about the health of others, there is no way to argue that he is wrong from the point of view of science. (The Persuasion Problem)

3. Even if we did agree to grant “health” primacy in any discussion of medicine, it is difficult or impossible to define it with rigor. It is, therefore, impossible to measure health scientifically. Thus, there can be no science of medicine. (The Measurement Problem)

I think his response to the third point is good enough. His main point, however, is that since we have no qualms about there being a science of medicine focused on helping people with certain widely shared values (a preference for longevity, freedom from disease, etc.), we shouldn’t have any about a “science of morality” based on universal values either. There is a gaping flaw in this bit of reasoning. Yes, one can perfectly well develop a budding “science of morality” in this fashion. But that system won’t be binding, and that makes it unworthy of being called a system of morality.† The fact that most people share some basic values, and thus can form a system of medicine based on them, is just a matter of convenience, nothing else, much like soccer fans agreeing to form FIFA and support the game. No one is obligated (nor should be) to accept the recommendations of that system if he or she doesn’t accept the values that undergird it. If you don’t prefer longevity, you can ignore suggestions about how to live longer. In fact, you can and do make your own value judgements about your health: weighing the side-effects of a pain reliever against the short-term relief is your decision.

Of course, we know that people tend to agree, by and large, on what they value about health, and that allows doctors to make general recommendations based on universal albeit subjective values. That’s perfectly fine for a system of medicine, but not for one of morality, because it’s not enough for its foundational premises to be universal. They need to be objectively true.†† Carroll expresses this quite well in his review of Harris’ book:

…Can we not even imagine people with fundamentally incompatible views of the good?  (I think I can.)  And if we can, what is the reason for the cosmic accident that we all happen to agree?  And if that happy cosmic accident exists, it’s still merely an empirical fact; by itself, the existence of universal agreement on what is good doesn’t necessarily imply that it is good.  We could all be mistaken, after all.

Our system of medicine makes claims of the sort, “If you value living longer, don’t smoke.” It does not say that you ought to value living longer; it tells those who do how to achieve that end. Morality, by contrast, is not about making “ought” statements contingent on a person’s wishes or values. Rather, it’s about claiming what people ought to value regardless of what they already happen to value. For every statement like “if you value seeing other people happy, donate to a charity,” there can be an analogous statement, “if you value killing people, purchase a grenade and drop it in a mall.” Any meaningful system of morality needs to tell us why valuing other people’s happiness is objectively better (or worse) than valuing killing people, instead of just making recommendations about how to fulfill the values we already hold to the maximum. “Oughts” of the sort “if you value X, you ought to do Y” simply aren’t valuable in answering questions about morality.

One of the main challenges of metaethics and moral philosophy is to find out what the proper conception of “good” is. Once that’s established, finding ways to maximize that “good” is, I dare say, comparatively trivial. If Harris really wants to make a case for moral realism, for why some people’s conception of morality is wrong, he needs to tell us why his conception is correct. It is not enough for his “science” of morality to prescribe how to maximize aggregate well-being. It needs to tell us why that is the proper goal of morality.

† Consider this: two people build two different sciences of morality: science of morality A, whose aim is to maximize the aggregate well-being of sentient creatures, and science of morality B, whose aim is to maximize some other variable X, say a particular person’s well-being (it’s not hard to think of many such variables). The big question still remains: whose prescriptions, A’s or B’s, should you follow?

†† Many people who defend Sam’s analogy assert that “just like there can be objective claims about health, there can be objective claims about morality.” There is a genuine confusion underlying this. The term “health” is analogous to “well-being,” not to “morality.” The analogous (and correct) assertion is “there can be objective claims about the well-being of sentient creatures,” which is irrelevant to a discussion about morality because a claim about well-being isn’t a moral claim per se.

The photo of Sam Harris belongs to Steve Jurvetson and is used under CC BY 2.0.


4 thoughts on “A Response to Sam Harris’ health-morality analogy”

  1. Harris’ response (if he were speaking entirely frankly and clearly) would be: The very idea of a ‘binding’ morality is either a confusion or a fiction. If my use of the word ‘morality’ makes it easier for us to sidestep talking in terms of it, all the better. Agrippa’s trilemma shows that (as a matter of logic) we can’t have a ‘binding’ set of justifications for anything, in any field, ever. Truths of mathematics, science, logic, and daily life are all dependent on assumptions that cannot be non-circularly justified. There is no reason to expect morality to be any different, and there never has been any such reason.

    So if you demand that morality be different in this respect, then, OK; you define ‘morality’ in such a way that all moral statements are false (or meaningless). But that seems linguistically cumbersome, when we have all these handy words like ‘good’ and ‘ought’ that mirror the structure of some very important real-world properties we urgently need to be talking about in order to evaluate and promote policy decisions. So I humbly dissent from that use of the word ‘morality’.

    For practical purposes, the important question is not ‘Is there some mysterious platonic Goodness that makes us Objectively Justified in adhering to claim X?’ Rather, the important question is ‘What real-world impact will X have on the things I (or my community) care most about?’ If philosophers want to make the world a better place, they need to worry less about Objective Justification and more about the cash value of different kinds of justification in the goals and experiences we actually have.


  2. Well, then why aren’t people (like me) who hold a different set of moral premises than Harris entitled to their own “sciences” of morality?

    As a practical matter, I (and many others) do not think that maximization of aggregate well-being is the proper goal of morality (i.e., of our own personal morality), and thus would not want to cede ground to others by letting them call their personal morality the objective or scientific one. (People disagree on what’s moral, even when they agree on the “is,” or all the facts.)

    Harris also wants to say that some moralities held by people are wrong. If he wants to make such a statement, he needs to show that the one he proposes is the “right” one. Otherwise, what would it mean to call others’ moralities “wrong”?

    It’s fine as long as Harris calls the science he develops (or proposes to develop) the science of maximizing utility or well-being. He can simply convey the results of that science as “Y’s act X is causing a major decrease in aggregate well-being.” Words like “ought,” it seems to me, should be reserved for people expressing beliefs borne out of their personal moralities.

    What I think Harris is trying to do is mislead the many people who do not grasp the nuance. Many of his readers (not you, of course) seem to think that his use of the words “science” and “reason” confers objective justification (which you deny) on his morality, and are unable to see why this doesn’t work the way it does for “health.”

    Now, I acknowledge that being skeptical about morality raises some questions about other kinds of knowledge. And I think that skepticism is warranted. I do not pretend that a person is bound by rationality to reject solipsism or embrace induction. What we think we know is ultimately based on properly basic beliefs for which we have no non-circular justification, as you note. So I accept that, and do not fetishize “facts” as if we could know anything without first making some assumptions (which others are not bound to make). As a result, I’m more receptive towards Plantinga’s Reformed Epistemology than most other people. (His proposal is, in a sense, similar to yours about morality: almost nothing we know can be justified without some assumptions. What if we add one more assumption, which has no non-circular justification, that God exists? Make it a properly basic belief?)

    What helps in matters like the above is that most people are born with an inclination to believe in induction (as opposed to counter-induction), to reject solipsism, etc. But that’s not the case with morality. Although a vast majority of people do think that there are “objective” rights and wrongs, at least in some cases, they do not happen to agree on what’s right and what’s wrong in most cases. People have different sets of moral premises. The moralities formed from such different premises produce highly varying sets of recommendations on what one “ought” to do, what’s “right,” etc. Even those who happen to agree on the maximization of well-being as the one and only goal of morality differ on their definitions of well-being, and thus their moralities end up differing *a lot* (so much that, under certain conditions, an act X must be done by a person Y according to morality A, but must not be done by the same person according to morality B).

    Since there is so much disagreement in practice, it’s better not to found a science of morality on one person’s choice of moral premises. It’s better to be clearer and say, “this is what you can do to maximize well-being,” and let people decide for themselves.


