I'm writing this in a state of sleep-deprivation and, to some extent, relief; after a week of trying -- stupidly -- to finish up a paper before the APS meeting (in Dallas next week), I have had to resort to plan B/damage control: pad talks and be opaque enough (hopefully!) to avoid getting scooped.
Note that Alan has a new blog (linked, as one might expect, on the right). In this post he touches on a longstanding disagreement between us on "reasonableness." To be a little reductive, Alan is (like most of my friends) a firm believer in self-improvement, intuition pumps, Science, and the like; I am a nihilistic slob, with considerable sympathy for irrationalism. Part of this is, no doubt, due to differences in temperament (this is the only way I can explain the fact that I'm not a vegetarian), but differences in intellectual history also have something to do with it.
I should distinguish between contemplative and instrumental rationality: the former is about getting facts right, not believing false arguments, etc.; the latter is about getting what one wants, whatever that might be. (The former is a limiting case of the latter.) Given a list of desires and beliefs, instrumental rationality tells you what actions are (in some pretty obviously meaningful sense) "rationally binding." In certain very specific contexts (e.g., a prisoner trying to escape), what one wants is clear, and instrumental rationality is a useful tool.
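(The standard formalization of "rationally binding" -- not spelled out above, but it is the textbook expected-utility picture -- is that, given beliefs encoded in probabilities $P$ over states $s$ and desires encoded in a utility function $U$, the binding action is

$$a^{*} = \arg\max_{a} \sum_{s} P(s)\, U(\mathrm{outcome}(a, s)),$$

i.e., a fact about the agent's own $P$ and $U$, not about the world.)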
Perhaps some cases in the historical core material of economics -- purely profit-maximizing regimes of endeavor -- resemble this; however, whether any of it applies to everyday life is much less clear, as it is not evident that people have fixed desires in any meaningful sense. (I wholeheartedly agree with Andrew Gelman's aphorism that "the utility function is the epicycle of social science," which I probably consider to be more broadly true than Gelman does.) In order to adapt instrumental rationality to everyday life, one is forced to do a sort of three-step: (a) assume that a utility function exists, (b) use a combination of survey data, "revealed preference," behavior, and intuition-pumping to figure out what the utility function says, (c) accuse people of being irrational when the "best" utility function doesn't do a good job of predicting their behavior.
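(To make the three-step concrete, here is a toy sketch -- not a quote of anyone's actual methodology; the choice data, the one-parameter power-law utility u(x) = x^alpha, and the grid-search "fit" are all invented for illustration. One assumes a utility function, picks the parameter that best rationalizes some observed gambles, and then labels whatever the best fit fails to predict as "irrational.")

```python
# Toy sketch of the three-step; every number and functional form here is made up.
import numpy as np

# (a) Assume a utility function exists: u(x) = x**alpha for an amount of money x.
def utility(x, alpha):
    return x ** alpha

# Observed "choices": on each trial the subject picks between a sure amount and a
# 50/50 gamble paying either gamble_high or nothing; 1 means they took the gamble.
sure = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
gamble_high = np.array([30.0, 45.0, 55.0, 70.0, 90.0])
took_gamble = np.array([1, 1, 0, 0, 1])

# (b) "Reveal" the preference: pick the alpha that best reproduces the choices.
def predicted(alpha):
    return (0.5 * utility(gamble_high, alpha) > utility(sure, alpha)).astype(int)

alphas = np.linspace(0.1, 1.5, 200)
best_alpha = max(alphas, key=lambda a: int((predicted(a) == took_gamble).sum()))

# (c) Whatever the best-fitting alpha fails to predict gets called "irrational."
misfits = (predicted(best_alpha) != took_gamble).sum()
print(f"best alpha = {best_alpha:.2f}; 'irrational' choices: {misfits} of {len(sure)}")
```

In this toy example no single alpha reproduces all five choices, so step (c) is guaranteed to turn up some "irrationality" -- which is rather the point.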
Bertrand Russell remarks, re Locke's ethics:
Almost all philosophers, in their ethical systems, first lay down a false doctrine [in context: a false descriptive theory of human motivation], and then argue that wickedness consists in acting in a manner that proves it false, which would be impossible if the doctrine were true. Of this pattern Locke affords an example.
To the extent that (b) is about empirics (revealed preference or survey data), the three-step above clearly falls into this pattern, with (a) as the false doctrine. To the extent that it relies on intuition pumps, these are only as reliable as one's prejudices. Extreme cases can be dismissed on the grounds that they're outside the expected regime of validity of one's moral systems. (Derek Parfit attempts, unpersuasively, to counter this argument somewhere; I forget his point.) On the other hand, there is no particular reason to expect that true observations about human nature of the kind that come out of neuroscience etc. (which are bound to be statistical) are likely to imply, or "go with," a moral system. In order to make this sort of inference one must bring in a principle of the form "what is 'normal' is 'healthy' and 'good'" -- or some equivalent kind of naturalistic inference, even in the individual rather than collective sense -- which I find (even in its mild forms, like "it cannot be morally binding to be an outlier") both repugnant and not a priori true.
In short, I believe that this line of thinking is unlikely to get anywhere specific, and the standard attempts to work around the skeptical arguments remind me of the epic exercise in flailing that is the Russell/Whitehead Principia Mathematica. This is, perhaps, where differences in training (not to mention the degree of one's interest in self-improvement) come in: Alan would presumably say that one ought to learn philosophy to (a) at least understand the skeptical arguments (self-improvement) and (b) find a systematic framework, however imperfect, in which to address these questions. As regards (a), there are lots of causes in physics and math that are understood to be lost causes. I understand many of them very hazily -- it is useful to know enough to realize when a line of inquiry you once thought promising turns out to be equivalent to a lost cause -- but do not have the time to buttress my skepticism. And I think (b) relies on an assumption that conscientiousness is intrinsically worthwhile -- esp. w.r.t. important matters -- which is quite unnatural for a physicist; one does not waste time on problems, no matter how important, that are generally believed to be intractable, unless one starts out with a specific reason to believe the consensus is wrong. If the most elaborate reflection doesn't produce policy that's demonstrably better than dominance reasoning plus coin-flipping -- and the upshot of the skeptical arguments, I think, is that it cannot -- one shouldn't waste time on it. Ideas do not get A's for effort.
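(For concreteness, "dominance reasoning plus coin-flipping" means something like the baseline policy sketched below. The options and their scores are invented for illustration; this is just the idea, not anyone's official decision procedure: throw out any option that is at least as bad as some other option in every respect and strictly worse in at least one, then flip a coin among whatever is left.)

```python
# Minimal sketch of "dominance reasoning plus coin-flipping"; options and scores are invented.
import random

# Each option is scored along several dimensions (higher is better).
options = {
    "job A": (3, 5, 2),
    "job B": (4, 5, 2),   # dominates job A: at least as good everywhere, strictly better once
    "job C": (1, 7, 4),
    "job D": (1, 6, 3),   # dominated by job C
}

def dominates(x, y):
    """x dominates y: at least as good in every dimension, strictly better in at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

# Dominance reasoning: discard anything some other option dominates.
undominated = [
    name for name, score in options.items()
    if not any(dominates(other, score)
               for other_name, other in options.items() if other_name != name)
]

# Coin-flipping: pick arbitrarily among the survivors.
print("undominated:", undominated)   # expected: job B and job C
print("chosen:", random.choice(undominated))
```

Elaborate reflection has to demonstrably beat this sort of baseline before it is worth anyone's time.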
---
An especially important example -- pace Kuhn, a paradigmatic one -- of a lost cause is Aristotelian physics. Steven Weinberg (again via Alan) wrote of "the shift (which actually took many centuries) from Aristotle's attempt to give systematic qualitative descriptions of everything in nature to Newton's quantitative explanations of carefully selected phenomena." The primary lesson of this example is that the best way to get a handle on some problem is not always -- or even generally -- to approach it head-on. One is better off explaining something thoroughly and working outwards. The flip side is that one might end up with meteorology, in which the questions "of interest" are intractable and the explicable phenomena (icicle bending!) aren't of much interest. (If I were an economist I would be working on Zipf's law.) Almost all phenomena -- and a fortiori almost all relevant ones -- are in practice impossible to understand from first principles.
There's much more to be said about all of this but I'll restrict myself to a brief note on "ideology" as the term is used in physics. An "ideology" is a widely believed but vague and/or unprovable rule of thumb that can be applied to prove various specific results. It is, in general, a rule about what questions to ask and what kinds of answers to look for. The "renormalization group" idea in physics is an ideology that plays a role roughly like that played by evolution in biology: one cannot reduce it to a precise, true, non-vacuous statement, but it guides the field. Ideologies are what Weinberg refers to as the "soft" parts of theories:
There is a "hard" part of modern physical theories ("hard" meaning not difficult, but durable, like bones in paleontology or potsherds in archeology) that usually consists of the equations themselves, together with some understandings about what the symbols mean operationally and about the sorts of phenomena to which they apply. Then there is a "soft" part; it is the vision of reality that we use to explain to ourselves why the equations work. The soft part does change; we no longer believe in Maxwell's ether, and we know that there is more to nature than Newton's particles and forces. ... But after our theories reach their mature forms, their hard parts represent permanent accomplishments.
This distinction exists to some degree outside particle physics -- a great deal has been learned about the lineages of various species, etc., even if the ideology that led to these discoveries turns out to be false. But it's worthwhile to distinguish between the predictions of a theory -- i.e., predictions that come out of the "hard part" -- and those of an ideology, which come from the soft part. The latter cannot be disproved in any straightforward way -- they are just patterns we impose on selected agglomerations of fact -- and change, as often as not, because the community becomes interested in other problems where the ideology is less useful. (For instance, an ideology in condensed matter physics is -- very roughly -- that any graph of the properties of a large system that exhibits jumps is a sign that there's something "topological" about the system. This ideology has led to a number of fascinating discoveries about the way electrons move in metals; however, it is always possible to have jumps for all sorts of non-topological reasons. A bit of particle physics ideology that was behind Weinberg's Nobel Prize-winning work was the puzzling-to-an-outsider belief that all bosons [particles obeying Bose-Einstein statistics] are gauge bosons.)
As I understand them, both "incentives" and adaptationism are ideologies of this kind. I think ideologies are great as long as one's ultimate objective is to solve specific problems. One should perhaps be more careful, though, when trying to justify specific research programs on the grounds that they might "prove" or "disprove" the ideology; it's (in T.S. Eliot's phrase) like trying to dispel a fog with hand grenades. I.e., a lost cause.