If you have the time (a lot of it -- it should be evident from my posting frequency that I'm procrastinating with a vengeance), you might be interested in this bloggingheads discussion between Don Loeb and Peter Railton about moral realism. (Watch Loeb's eyes while Railton's speaking.) I had read some of Railton's stuff in college, thanks to Alan, but it was all very meta-ethical and nonconstructive, and seemed liable to dismissal on "show me an absolute moral truth and I'll show you a contradiction" grounds. The video makes it clearer what Railton's truths are: stuff like "you should keep your promises." (See also: this Boston Review piece.) The "evidence," such as it is, comes from history and psychology. I find his position pretty unappealing, because (1) he disqualifies all forms of local feeling (patriotism, etc.) as not moral; (2) intuitions even about cheating are not quite the same everywhere -- in Tanzania, where I grew up, it's generally considered legit to cheat an out-group; and (3) it is not true outside of Ann Arbor that everyone believes in tolerance. Now I happen to agree with Railton's sentiments, but I'm bemused by any historical analysis implying that people have always held universalist values.
Defenses of moral truth often come down -- implicitly -- to the claim that the fact/value distinction is fuzzy, and that denying moral truths is like solipsism. The difference is that we all agree, fairly strongly, on what the natural world is like, and philosophical justification there is a formality; we do not, however, generally agree on morality, and I have no consistent intuition about the existence of moral truths.
There's an interesting bit (ca. 30:00 to 40:00) on cognitivism: cognitivists claim that moral assertions are statements about something, whereas non-cognitivists claim that they are expressions of attitude -- e.g., "Yay Red Sox." (The BR article has a nice example of the semantic dangers here.) Non-cognitivists are obligated to explain how logic and consistency apply to moral statements, if they do. That "if" is something I've wondered about in the past, and it still puzzles me. There are some trivial cases of "moral reasoning" -- Killing pinnipeds is bad. A walrus is a pinniped. Therefore, killing walruses is bad. -- in which the premises do seem to imply the conclusion. However, I dunno about the law of the excluded middle: either one ought to kill walruses or one ought not to kill walruses. Seems to me it could plausibly be both, or neither. (The trouble here is, of course, that practical reasoning is a matter of weighing evils against each other, and "ought" is too blunt a term to be of any use.)
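(For the curious, both the trivial case and the excluded-middle worry can be made precise. Below is a minimal sketch in Lean 4; the predicates Animal, Pinniped, BadToKill, and Ought are made up for illustration, not anyone's official formalization. It shows that the walrus syllogism is ordinary universal instantiation, and that the literal law of the excluded middle only yields "ought or not-ought," which is weaker than "ought to kill or ought to refrain" -- so "neither" is classically consistent.)

```lean
-- A minimal sketch in Lean 4. `Animal`, `Pinniped`, `BadToKill`, and
-- `Ought` are hypothetical predicates introduced for illustration.

-- The walrus syllogism is ordinary universal instantiation:
example {Animal : Type} (Pinniped BadToKill : Animal → Prop)
    (premise1 : ∀ a, Pinniped a → BadToKill a)  -- killing pinnipeds is bad
    (walrus : Animal)
    (premise2 : Pinniped walrus)                -- a walrus is a pinniped
    : BadToKill walrus :=                       -- so killing walruses is bad
  premise1 walrus premise2

-- Excluded middle proper is `Ought p ∨ ¬Ought p`, which classical logic
-- gives for free:
example (Ought : Prop → Prop) (p : Prop) : Ought p ∨ ¬Ought p :=
  Classical.em (Ought p)

-- Nothing above forces the stronger `Ought p ∨ Ought (¬p)`, so "neither"
-- (¬Ought p ∧ ¬Ought (¬p)) is classically consistent; "both" is ruled out
-- only by an extra deontic axiom such as ¬(Ought p ∧ Ought (¬p)).
```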
PS Hilary Putnam argued, in "Mathematics without Foundations," that object talk can quite generally be re-expressed as modal talk and vice versa. I imagine this applies to the debate over cognitivism, and potentially settles it. Now where's my prize.
1 comment:
This is off-topic and about an enterprise that you and most of your readership (pardon the redundancy) probably regard as futile: answering the question "what does it mean for something to be good for someone?" Railton's answer (at least at one point) is essentially "what he would want himself to want, or to pursue, were he to contemplate his present situation from a standpoint fully and vividly informed about himself and his circumstances, and entirely free of cognitive error or lapses of instrumental rationality." Here are some interesting objections, culled from a (naturally lame) response paper I wrote in college:
Connie Rosati's first objection to "ideal advisor" accounts of the good is that they lack normative force. This is so because one's ideal advisor is so distinct from oneself that she may as well be considered a different person. Additionally, "In order for us to be sure that we can regard the fully informed individual as authoritative, we must have a conception of what it would be for an individual's motivational system to change for the better, and thereby a more substantive conception of an ideal advisor – one that incorporates an ideal of the person." Rosati's second objection is that the concept of a fully informed person is incoherent. She writes, "But what traits might enable a person to appreciate side by side what it is like to live lives of possible selves whose traits are so at odds? Is the fully informed person a hybrid, for instance, someone who is at once perceptive and sensitive, obtuse and controlling?"
I find these arguments and Rosati's preemptions of ideal advisor responses quite persuasive. Nevertheless, my intuitions about prudential value incline me towards some sort of ideal advisor account. Maybe I should take after Rosati and treat this account as a mere "regulative ideal." But this smacks of a cop-out. Conceptually, there seems to be a definitive answer, at a given point in time, to the question "what would be the most personally utile way for me to live the rest of my life?" Rosati shows that no hypothetical person with a fixed perspective, however broad, could provide this answer. However, comparing the utility of one's "possible lives" still seems to be the way to get at one's good. Perhaps, then, an ideal advisor can be thought of as someone who can somehow compare (say, based on "interviews") the utilities of each possible life and then recommend the best one.
If the above suggestion merely sidesteps Rosati's objections, then maybe I can challenge her directly on the point that "[i]f a person must have certain traits in order to experience something in a certain way, it seems she must also have those traits…in order to remember what it is like to experience that thing in that way." It seems plausible, for instance, that someone can remember being an impulsive child despite being a cooler-headed adult; one can understand many dispositions but value some more than others.