The naturalistic fallacy consists of deducing the way things ought to be from the way things are. Most intelligent Pinkerites know this, and have a second-string argument lined up: since ought implies can, cannot implies ought not, and therefore the negative findings of psychologists have legitimate deductive moral consequences. (Dennett makes this argument somewhere; Will Wilkinson makes it online.) This seems to me a serious misreading of "ought implies can." When social scientists find "facts" about human nature, these tend to be probabilistic, and tend to have large numbers of counterexamples. The argument form -- people are like X, Xs can't Y, therefore people can't Y, therefore they can't be expected to Y -- usually has counterexamples; a 10% rate of counterexamples wouldn't stop an academic psychologist, let alone David Brooks, from making the first claim. Applied to statistical "cannot"s of this kind, "ought implies can" therefore amounts to "people cannot be obligated to be outliers," or "the average life is basically ethical." But this is by no means a universal axiom of moral systems; it's a fairly strong assertion, inconsistent with Plato's cave, with Augustine and Calvin, and presumably with Nietzsche.
"Ought implies can," for sociobiological "can," is a conversion rule of the same general type as utilitarianism. It isn't a fatuous rule but it doesn't, like, follow from first principles.
26 comments:
Okay, that does sound pretty dumb. But, on the other hand, many people believe that it's (relatively) excusable to fail at a moral duty it's (relatively) hard to perform. Doesn't that make roughly the "ought implies can" point for probabilistic observations about human nature?
It's the same point in practice, I think, and has the same limitation. The problem is that whether failure is excusable depends on the importance as well as the difficulty of the duty. A predisposition towards alcohol is a bad excuse for abandoning your family, temperamental instability is a weak excuse for murder, etc. There's a widespread but incoherent belief that common failings are more excusable than rare ones. In any case, there's a hidden premise to the effect that most people lead morally acceptable lives, which disagrees with a lot of philosophers and cynics, and hence isn't a universally acceptable axiom.
That hidden premise isn't there if "people cannot be obligated to be outliers" (the presumably strawman interpretation of the Pinkerite conclusion) is changed to "people cannot be obligated to be outliers when that would be asking too much."
It doesn't make sense to say that most people are evil unless you believe that most people (sufficiently) fail to behave as we can reasonably expect them to. This belief is of course not inconsistent with the second statement above.
Accordingly, when a psychological study finds that a supermajority of people fail to do the right thing in a given situation, the proper conclusion isn't that their failure must be reasonably acceptable; it's that their failure is one worth addressing. Only if a supermajority of people continue to fail after an adequate response should we be inclined to deem their failure reasonably acceptable. For example, the fact that tons of people illegally download MP3s is not indicative of "cannot" because the law is poorly enforced and we have no reason to believe the desire not to pay is overwhelming. But if trapped spelunkers keep cannibalizing despite invariable murder convictions, then maybe we should reconsider condemning them.
Maybe I've said something stupid in my hasty stupor, so let me just say this: there is an unassailable version of "cannot implies ought not" that follows from "blame implies should have implies could have at not too great a cost." If this is a "fairly strong assertion" it's only because a lot of philosophers and cynics are incoherent.
I disagree about the trapped spelunkers, which seem central to your argument. I don't think temperament can meaningfully be considered an extenuating circumstance in rare cases (is "he started lying before he could speak!" a valid defense?) and don't see why we should treat it like one in common cases. To meet minimum moral criteria, whatever these are, is harder for some people than for others; the claim that "oughts" must be such that 50% or 66% of people have a relatively easy time meeting them IS a fairly strong assertion.
I just read the Wilkinson post, and it's even stupider than you make it sound. When you said he infers "cannot implies ought not" from "ought implies can", I charitably assumed that by "ought not" he meant something like "not obligatory" rather than "impermissible". But no:
"And because ‘ought’ implies ‘can’, there is a straightforward link from the descriptive to the normative. Because a theory of human nature can tell us a lot about what we can’t do, and what won’t work, we can learn a lot about what we shouldn’t do."
He evidently thinks "cannot implies ought not" should be read "If we can't do something, then we shouldn't do it." Of course, this doesn't follow from "ought implies can".
To belabor my pedantic assholery, here's a proof a la Wilkinson that humans are omnipotent: (1) Ought implies can. For the same sorts of reasons, (2) ought not implies can (if something is morally impermissible to me, it must be possible for me to do it). But, (3) cannot implies ought not (contraposing 1). So, from 2 and 3, (4) cannot implies can. Therefore, (5) can.
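Spelled out symbolically (the symbolization is mine, not Wilkinson's: write $O(p)$ for "we ought to do $p$" and $C(p)$ for "we can do $p$"):

```latex
\begin{align*}
&\text{(1)}\quad O(p) \rightarrow C(p)
  && \text{ought implies can}\\
&\text{(2)}\quad O(\neg p) \rightarrow C(p)
  && \text{a prohibition presupposes the ability to violate it}\\
&\text{(3)}\quad \neg C(p) \rightarrow O(\neg p)
  && \text{Wilkinson's reading of the contrapositive of (1)}\\
&\text{(4)}\quad \neg C(p) \rightarrow C(p)
  && \text{from (3) and (2)}\\
&\text{(5)}\quad C(p)
  && \text{from (4), since } (\neg q \rightarrow q) \vdash q
\end{align*}
```

The legitimate contrapositive of (1) is $\neg C(p) \rightarrow \neg O(p)$ ("not obligatory"), not $O(\neg p)$ ("impermissible"); the parody turns on exactly that conflation.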
Your argument is brilliant. To be fair to Wilkinson (who's kind of an idiot), I think he was trying to say something like "we can't do X, therefore we shouldn't try to do X because it's pointless," or something like that; however, the rest of his post is compatible with your interpretation.
On reflection, maybe I am being uncharitable. Maybe when Wilkinson says we can learn about what we shouldn't do from what we can't do, he means that from "we can't do x" we can learn that we shouldn't punish people for not doing x. But Wilkinson is a libertarian, so I'm guessing the more ridiculous interpretation of his claim is the correct one.
Can!
Slightly more relevant comment:
I think you're right if we're talking about the application of "ought implies can" to populations. "Cannot" claims made at the population level (situationist stuff, for example) are statistical, and a certain number of counterexamples are expected. The contrapositive of "ought implies can" assumes a universal exceptionless "cannot", so it does not really engage with the psychological findings.
But "ought implies can" isn't just a population-level moral principle. Helen Keller has no obligation to drive her sick mom to the hospital, because she can't. This is an application of the principle to individual capacities, and I think it's just as much of a truism as the population-level application.
And here I think the negative findings of psychologists can have moral consequences. The population-level claim: "Psychopaths cannot defer gratification," combined with the individual diagnosis: "Alan is a psychopath," plausibly entails that Alan is not obligated to defer gratification.
Of course, the psychological claims I'm talking about here are also statistical, but when you're dealing with individuals, the probabilities involved are best interpreted as credences rather than objective frequencies. So the claim about Alan would amount to something like "I'm x% confident that Alan cannot defer gratification." This just means that the moral consequences we infer also have confidence levels attached to them, and there's nothing unusual or pernicious about that.
Incidentally, I think the Pinkerites need not appeal to "ought implies can" at all, because the naturalistic fallacy is horseshit. Sure, we couldn't derive an "ought" from an "is" if the only inferential resources available to us were the rules of formal logic. But we make material (non-logical) inferences all the time, and many of them are as good as logical inference. The inference from "x is green" to "x is colored", for instance, is as psychologically immediate and as secure as modus ponens. And I think certain inferences from natural facts to moral claims are in this category.
"You can't derive an 'ought' from an 'is'" comes from a guy who fetishized deductive logic to the extent that he thought inductive inference was unjustified because it couldn't be adequately re-expressed deductively. I don't think we need to follow him in that insane tendency.
1. I don't know how far I agree re Alan and deferred gratification. It seems odd to control for temperament when making ethical judgments. The Greeks, to some extent, also thought it was silly to control for ability, and might have disagreed about Helen Keller.
1'. In any case, I think that to apply ought-can to sociobiology you need population-level inferences.
2. What is-ought inferences do you consider immediate? I take it that the existence of is-ought inferences immediately implies that of moral truths: I don't know of any moral truths and would be quite surprised if any existed.
Here's an is-ought inference that is plausibly immediate. My friend is torturing a baby in front of me for his own amusement and I know he will stop if I tell him to. Given this factual information, I immediately infer that I ought to tell him to stop.
Of course, if you don't find this example compelling, I probably couldn't convince you otherwise. But I'm not claiming that the psychological immediacy of this particular inference is a universal human trait (although I'd guess it's pretty damn close).
Human inductive biases are probably universal (most of them, at least), but they're also probably a contingent product of our evolutionary history to some extent. We could have ended up with biases according to which grue rather than green is the projectible predicate (I think it's an underappreciated fact that our inductive preference for green over grue cannot be explained as an adaptation). If I were confronted with an alien species with grue-type inductive biases, I would not be able to convince them that they are wrong. But it is obvious to me that they are wrong. It's just how I'm built. I'm a machine that cannot avoid making certain inductive inferences rather than others. I feel the same way about certain is-ought inferences. The immediacy and solidity of these inferences is obvious to me, and the existence of agents who thought otherwise would not dent this obviousness.
Would the existence of non-alien cultures with the opposite inductive bias do much to spoil that inference? I imagine there are cases in which, e.g. the baby belongs to a hated out-group or something, and torturing it is considered morally neutral or praiseworthy. There obviously isn't (at least wasn't always) a universal moral compass about e.g. some pretty crazy forms of human sacrifice, and I don't see that your case is all that different.
Let me make another attempt at the point I was trying to make in the last post. I take it that a major reason for rejecting moral realism is that moral truths, if they existed, would be epistemically inaccessible. I say this is false: moral truths follow immediately from natural facts. The moral skeptic says this is implausible because: (a) There can be moral disagreement among rational agents who agree on the natural facts, and (b) I cannot deduce a moral fact from a natural fact without implicitly assuming another moral fact.
I say that if reasons like (a) and (b) were conclusive, we should be skeptics about induction. Deductively rational agents who agree on the natural facts could disagree on the consequences of inductive inference, and we've known since Hume that inductive consequences can't be deduced without assuming some indemonstrable claim about the uniformity of nature (and since Hume we've learned that we assume not just that nature is uniform, but also what it would be for nature to be uniform).
I'm guessing most moral skeptics would not also be inductive skeptics, but I'm not sure what sort of reasons they could have to reject the former but not the latter except perhaps the purely psychological fact that no is-ought inference seems as immediate to them as inductive inferences.
Yeah, I don't even think the existence of an inductively aberrant human tribe would shake my faith in inductive inference. I don't see why it would be relevant that they were human rather than alien.
An addendum to my last post: I guess I could see why it would matter that another human group didn't share my inductive biases if I were trying to give a natural selection based justification for those biases. But, like I said earlier, natural selection can filter out some rival biases, but there remain an infinite number of rivals that it can't filter out. So it doesn't matter (for justification) if my bias is innate or cultural.
"...except perhaps the purely psychological fact that no is-ought inference seems as immediate to them as inductive inferences."
Sort of. It's partly historical. I'd call it the limits-of-empathy problem. By my moral lights, a lot of cultures today and most cultures in the far past were seriously aberrant in their moral beliefs. This makes me skeptical of the universality of moral truths. No problem of comparable severity applies to induction.
To put this another way: my confidence in the existence of other people and the structure of their minds depends heavily on empathy. It's clear from the literary record that Homer and Chaucer did not have grue-bleen preferences. It is equally clear to me that we inhabit a different moral universe from Homer or the rune-makers.
I guess it's possible that we're talking past each other if you're not asserting the "objectivity" of moral truths. But I don't really understand the idea of a moral argument that's valid for one person only, or how that's different from e.g. a poem or a PETA documentary.
I'm not entirely sure what you mean when you say you're skeptical about the universality of moral truths. Here are two possibilities:
(1) You're skeptical that there are any non-trivial normative claims that would have normative force for all (or almost all) humans. I'm prepared to grant that that's false. But I don't think it has much bearing on moral realism for the sorts of reasons I mentioned. I guess maybe I should turn your question back at you: Suppose it turned out that the authors of ancient Chinese texts had grue-bleen preferences. Would that affect the credibility you assigned to your own inductive inferences?
(2) You're skeptical that there are moral truths that are universally applicable whether or not every moral agent feels their normative force. I think there are such moral truths and I don't know why historical considerations should lead to skepticism about this claim.
I am asserting the objectivity of moral claims. It's only incidental that the baby-torturer example involved me and my friend. It could have been you and your friend. Could it have been Homer and his friend? I'm not entirely sure. There's some degree of cognitive dissimilarity at which certain moral claims no longer apply. I don't think the baby-torturer example would work with gorillas. I don't really have any well-defined criteria I can appeal to here, but I'm just going to assert that Homer is not cognitively distant enough to be exempt.
I guess I would be sorely vexed if there were Chinese glue-bleeners. (They _would_ have to be Chinese, wouldn't they.) Would it affect my inductive confidence? Quite possibly. If it turned out that a change of basis like grue led to better physics, I might be tempted to change my inductive preferences (though that would make grue adaptive, which you say it isn't). e.g. I really do think of electrons as delocalized.
The historical record is what it is; it does seem somewhat telling, at least about the accessibility of any putative moral truths, that the extent of moral variation is incredibly greater than that of inductive variation. I don't much like the idea of there being some fairyland of moral truths Out There; if there are universally applicable moral truths (2), there should be universally persuasive moral arguments (1), as is more or less true with induction.
And it also seems relevant that, across a broad swathe of cultures in the West, there's basically complete inductive agreement and minimal, if any, ethical agreement. (I guess your baby-torturing thing might be universal here... But, like, I would have thought before 2002 that there was a universal consensus against all torture, which there isn't.)
1. I'm still having trouble seeing why you think the existence of moral disagreement is troubling. Usually when people point at moral disagreement, it's to indicate that the realist really has no transcendental justification for her moral code over others'. And in the absence of such a justification, she should acknowledge that it would be entirely arbitrary for her to assert that her moral precepts somehow latch on to moral truths "out there". There is nothing about her cognitive/epistemic faculties that would make it more likely that her beliefs track the moral truth (if there is such a truth) any better than an Islamic cleric's beliefs. Is this the sort of argument you have in mind?
Now it seems to me that the crucial premise in this argument is the absence of a transcendental justification (justification that is normative for all rational agents). The existence of disagreement is not an independent reason to reject realism; it just highlights the absence of justification. Sort of like: "Every sort of justification you come up with, I can point out a group of people who wouldn't really be moved by it. So it isn't really a transcendental justification after all." My claim is that if the standard of warrant for our beliefs is this high, we'd have to be skeptics about inductive inference.
Now it's true that inductive biases are more or less universal among humans. But in this case we have other reasons to think that a transcendental justification will not be available. In fact, the situation is worse with inductive bias. In moral philosophy, it may be true that no one has yet been able to articulate a meta-ethical theory that is universally compelling, but the possibility that this will happen in the future has not been ruled out. Ethicists still hold out hope that they will be able to do it. But with inductive inference, we know (thanks to Hume, Wittgenstein, Goodman and AI research) that a justification of this sort is impossible.
Incidentally, although we don't come across too many inductively aberrant agents in the present, I think it's likely (or at least quite possible) that we'll be surrounded by them in the not-too-distant future. There's no reason I can think of to expect that we will be able to duplicate our inductive biases (which are probably the product of all kinds of contingent facts of our evolution and material constitution) when we create artificial intelligences. After all, there's nothing obviously "natural" about our biases beyond the fact that they seem natural to us and they have (somewhat mysteriously) worked so far. No matter how many inputs we train our AIs on, it seems to me that we cannot rule out the possibility (more like near certainty) that there are hitherto untried inputs to which the AI will react in a completely unexpected manner. Despite his general annoyingness, I think Eliezer Yudkowsky writes about things like this quite well. Anyway, even if/when we're surrounded by these learning machines, most of them performing as well as us, I suspect most of us will continue to believe that all emeralds are green.
2. When I said green isn't adaptive, I didn't mean that green-science isn't better than grue-science. Maybe I should have been clearer. Let's say "grue" can be translated "green till 2012, blue after that". Now, for us grue is not a projectible predicate. We think predictions based on assuming the stability of grueness will fail in 2012. The hypothesis "All emeralds are grue" will turn out to be false. So we (or at least I) think green-science will be more successful, although both sciences will be equally successful till 2012.
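The simplified translation above can be sketched as a predicate (the cutoff date and names here are my own illustration, not Goodman's original formulation, which relativizes "grue" to a time of first observation):

```python
from datetime import date

# Goodman-style "grue" predicate, per the simplified translation
# in the comment above: "green till 2012, blue after that".
CUTOFF = date(2012, 1, 1)

def is_grue(color: str, when: date) -> bool:
    """True if an object of the given color, observed at `when`,
    counts as grue: green before the cutoff, blue from then on."""
    return color == ("green" if when < CUTOFF else "blue")

# Before the cutoff, "all emeralds are green" and "all emeralds
# are grue" fit exactly the same observations; the two hypotheses
# diverge only once the cutoff passes.
print(is_grue("green", date(2011, 6, 1)))  # True: pre-2012 green is grue
print(is_grue("green", date(2013, 6, 1)))  # False: post-2012 grue must be blue
print(is_grue("blue", date(2013, 6, 1)))   # True
```

The point in the surrounding comments falls out of the last three lines: no observation made before the cutoff can distinguish the green-hypothesis from the grue-hypothesis.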
But our current preference for green over grue as the natural predicate can't be explained in terms of the future success of green-science over grue-science. Evolution can't see into the future. In our present environment (and in all our past environments) green-people and grue-people are, ceteris paribus, equally fit. This is what I meant when I said we cannot offer an adaptive justification for green over grue. Until 2012, I don't think we can offer any sort of transcendental justification.
3. I like that you refer to "Chinese glue-bleeners". But really, that would be more Japanese, no? And it would be "glue-breeners".
Oh, and when I say moral truths are "out there", I hope you don't interpret me as a filthy Platonist. I don't think moral truths occupy some nebulous non-physical realm. They supervene on the structure of the world, just like logical/mathematical truths.
1. Sorry if I was unclear, but yes, I was making the sort of argument you outline, except that I think transcendent justification is unnecessary given that inductive biases are (until 2012) stable and universal enough for my tastes. Ethical biases are not. Obviously I'd have to rethink this if presented with _honestly_ grue-bleen robots, but until then I'm happy enough about induction.
2. Yes, I see your argument. I do wonder, though, if there's a resolution to this puzzle along the lines of the usual one to Schrodinger's cat -- the basis (live, dead) seems arbitrary but is in fact inevitable if you're careful about including the environment. e.g. There's clearly a "best" inductive algorithm for a bacterium, which has rudimentary mental skills; there's no good reason for a flatworm to change this, and so on up to us.
3. Many Chinese have r/l issues. Glue/bleen is necessary b'se glue/breen is good English; it's just breaking the word in a different place. At high school (Swahili also has l/r issues) I had a teacher who said palallelloglam and another that said pararrerrogram.