Let me remind you briefly of how Godel's proof works. You start with an axiomatic system (i.e., a list of allowed characters -- a language -- and a list of axioms in that language) that includes all the usual operations involving natural numbers. Because of the unique prime factorization property, you can assign each string in the language a unique number (called its Godel number) as follows: first assign each allowed character a number, then encode the string as 2^(number of first character) x 3^(number of second character) x 5^(...). E.g. if the language were English, you could encode the string "babe" as 2^2 x 3^1 x 5^2 x 7^5 = some large number. You could recover the word from the number by counting the 2's in its (unique) prime factorization to get your first character, then counting the 3's for your second character, and so on. Now you can start making self-referential statements within your language: you can say something like "the string with Godel number N has certain properties," which would mean "the string 'blah' has certain properties." You could drop "the string" as that's implied by the quotes.
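The encoding and decoding can be made concrete in a few lines. This is only an illustrative sketch -- the alphabet and character codes (a=1 through j=10) are made up for the example:

```python
# Godel numbering via unique prime factorization, following the sketch above.
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
ALPHABET = "abcdefghij"
CODE = {c: i + 1 for i, c in enumerate(ALPHABET)}  # a=1, b=2, ..., j=10

def encode(s):
    """Return 2^code(s[0]) * 3^code(s[1]) * 5^code(s[2]) * ..."""
    n = 1
    for p, ch in zip(PRIMES, s):
        n *= p ** CODE[ch]
    return n

def decode(n):
    """Recover the string by counting each prime in n's factorization."""
    chars = []
    for p in PRIMES:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        if e == 0:
            break  # no more characters encoded
        chars.append(ALPHABET[e - 1])
    return "".join(chars)

assert encode("babe") == 2**2 * 3**1 * 5**2 * 7**5
assert decode(encode("babe")) == "babe"
```

Note that the decoding step is exactly where uniqueness of prime factorization does the work: if a number had two different factorizations, the same Godel number could name two different strings.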

It turns out that one of these sentences is essentially Quine's version of the liar's paradox:

- “Yields falsehood when preceded by its quotation” yields falsehood when preceded by its quotation.

Now the point is that none of this would have worked without the uniqueness of prime factorization. And it's hard to see why a physical theory of everything -- if it were axiomatizable -- would have prime factorization in it. Consider the following simple theory of everything, which is of course incorrect but is probably structurally similar to what the real one would look like -- it has a finite list of things, 1 through N, each with a charge and a mass; the forces acting on each of them in any given configuration follow from Maxwell's equations, and how they react to the force is determined by Newton's laws of motion. (Throw in gravitation as well, it doesn't matter.) The point is that you now have all the prerequisites for an axiomatic system in which one can ask all sorts of questions about what the velocities, positions, position-velocity correlations, etc. of the particles are at any time, i.e., this is Laplace's universe. The key point is that the minimal set of mathematical rules you need to make sense of this theory are rules about arithmetical operations on the real numbers. Although the reals include the naturals, they are infinitely simpler because everything divides everything else. Natural numbers do not enter the language of the theory at all -- yes, you need natural numbers to list the particles while constructing the language of the theory, but that's in the meta-language. Ergo no prime factorization, ergo no self-referential statements, ergo no incompleteness theorem. While we do not know quite what the theory of everything would be it seems almost certain to be structurally rather like the one I just described. 
(You might say, well, isn't quantum mechanics "grainy" and doesn't it therefore have something to do with the integers; the graininess doesn't enter into the fundamental equations, but even if this were the case, you only add and never multiply quanta so you wouldn't need all of Peano arithmetic so you wouldn't have an incompleteness theorem.)
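For concreteness, here is what a bare-bones version of that toy theory might look like as a dynamical rule. Everything here is hypothetical -- the masses, charges, time step, and units are made up -- but it shows the structural point: every quantity in the theory is a real number, and the particle labels live only in the meta-language (the loop indices):

```python
# Toy "theory of everything": N charged point masses in 2D with Coulomb
# forces and Newtonian (explicit Euler) time evolution. Illustrative only.
N = 3
mass   = [1.0, 1.0, 1.0]
charge = [1.0, -1.0, 1.0]
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
vel = [[0.0, 0.0], [0.0, 0.0], [0.0, 0.0]]
dt, k = 0.001, 1.0  # time step and Coulomb constant (arbitrary units)

def step():
    forces = [[0.0, 0.0] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            f = k * charge[i] * charge[j] / r2  # Coulomb's law (repulsive if > 0)
            forces[i][0] += f * dx / r
            forces[i][1] += f * dy / r
    for i in range(N):  # Newton's second law
        for d in range(2):
            vel[i][d] += dt * forces[i][d] / mass[i]
            pos[i][d] += dt * vel[i][d]

for _ in range(100):
    step()
```

Every question the theory can pose -- positions, velocities, correlations at time t -- is a statement about arithmetic on the reals; nowhere does the object language quantify over the naturals.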

I think this is all true so far as it goes but there's a potential loophole in the argument, which is that one might want to ask questions about the world that are not phraseable in the language of the theory. For example one might want to ask questions about thermodynamics, involving macroscopic quantities like pressure and temperature in my toy axiomatic system. I could, in principle, define pressure in terms of a sum over all the velocities of the particles near the walls divided by the number of particles (or something like that), but really what I want to talk about when I'm talking about pressure is a property that becomes well-defined only for infinite systems, and I can't even construct my toy theory of everything for an infinite system. Therefore, what I'm really doing when I construct thermodynamics is setting up an entirely new axiomatic system by a process of taking limits over larger and larger versions of the original theory of everything.
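A toy illustration of that kind of definition (the units, particle count, and Maxwellian velocities are all made up): one can estimate the pressure of an ideal gas from a finite list of particle velocities via the kinetic-theory formula P = (N/V) m <v_x^2>. The point is that the estimate is a sum over microscopic quantities that only stabilizes as N grows large:

```python
import random

# Toy kinetic-theory estimate of pressure, illustrating how a macroscopic
# quantity is defined as a (normalized) sum over microscopic ones.
random.seed(0)
N, m, L = 10_000, 1.0, 1.0  # particle count, mass, box side (arbitrary units)
vx = [random.gauss(0.0, 1.0) for _ in range(N)]  # x-velocities, <v_x^2> = 1 by construction

# P = (N / V) * m * <v_x^2>: number density times mean momentum flux per particle.
pressure = (N / L**3) * m * sum(v * v for v in vx) / N
```

For small N the estimate fluctuates wildly; the "pressure" of three particles is not a meaningful quantity, which is exactly the sense in which thermodynamic variables live in a limit of the original theory rather than in it.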

Basically what this says is that a theory of everything in the ordinarily understood sense is not a theory in which a lot of the questions that physicists in practice work on would be well-defined. Could these "effective" theories, or the union of them, be strong enough to include PA? Maybe. In particular, a lot of natural-number arithmetic comes in from the topological properties of wavefunctions, the spaces they live in, etc. I'm still doubtful that you'd ever have to multiply natural numbers, but it's not beyond the bounds of possibility.

## 26 comments:

On reflection, I agree with what I take to be your overarching point, namely that Gödel's theorem doesn't render a useful theory of everything impossible. (To be fair, I'm pretty sure I didn't make that argument to begin with, although it may have sounded like it.) But I agree for different reasons than the ones you outline above.

When I was reflecting on this thread earlier, I had to remind myself that while a formal system of logic is sometimes used to describe a theory, perfectly valid theories need not rely on an axiomatic system. I'm not a fan of overly formalist approaches to thinking about the world. Mathematical proofs and logical arguments can be powerful tools, but they're only as useful as the map between your logical system and the real world. There is really nothing at all that says that the physical laws of the world must have an axiomatic form; so Gödel's result should be taken only as an indication of a fundamental weakness of formalism, not anything problematic about the laws of reality or our understanding of those laws. (My original rumination, which may have seemed to come from an opposite point of view, was only that I found it interesting that the Victorian era's absolutist view of the world was breaking down in so many ways, because there are so many exceptions that absolutism can't really embrace. I don't have any problem with a useful theory of everything that really *is* a theory of everything, rather than a theory of everything but the weird exceptions.)

Physicists have always been somewhat slapdash about their mathematics, seldom worrying much about whether results had formal justification as long as they worked. And I think they are right: where such justification was needed, it was eventually found. I can't think of a single instance where a subtle error in the mathematical underpinning of a good theory suddenly rendered the theory useless. In fact, you could argue that some of these subtle errors were extremely productive: for example, orbits aren't *exactly* ellipses, but they are close enough to be the foundation of centuries of rather good astronomy. The small deviations from true elliptical orbits eventually helped validate relativity, which didn't overturn traditional astronomy, but provided significant refinements and new explanations. And there are lots of instances where mathematical expressions explaining physical phenomena were found well before there was any notion of what was dictating the form of the expression - in fact they often helped physicists figure out the physical nature of things (e.g. quantization of charge and energy).

(con't) There is a certain irony in this whole exchange, because your arguments about Gödel's proof above are formally incorrect. I'll note why below, as well as what I perceive as the dangers of formalism in philosophy. But that is secondary. This whole odd proxy discussion (my original note to Dice, who forwarded it to you, etc.) can be looked at in two different ways. From a formal standpoint, you could dissect the statements that you and I have made in attack and defense, and eventually "prove" that some are formally true and some formally incorrect or at least malformed. But do that ad nauseam and it still doesn't touch the more fundamental human standpoint, which is the one that includes what's going on on a human level. On a human level, i.e. regarding us as human beings with feelings and imperfect cognition and so on, you were somewhat contemptuous of (if not actually pissed off at) what you thought I said in my "new age spew" and thought it was too stupid even to clarify, much less debate. I'm a little defensive and also somewhat resigned that an obviously intelligent person like you doesn't get enough of what I was saying (or tried to say) to even engage. I could attack your arguments above as "pseudomathematical drivel" or some such, and I'm sure we could happily flame one another all the live-long day.
But at bottom we'd just be engaging in a silly pissing contest. It's much more interesting, and I would argue far more productive, to understand why we even care. I'm pretty sure that whatever is at stake here primarily concerns neither Gödel nor physical or philosophical TOEs.

As for your formal argument, I think a mathematician would say that you have to be very careful with assertions about formal systems. So three points to consider.

First, Gödel's theorem can be constructed in any axiomatic system that is sufficiently powerful: "Gödel's second incompleteness theorem shows that no formal system extending basic arithmetic can be used to prove its own consistency" (from the Wikipedia article on the Principia Mathematica of Russell and Whitehead). PM is the canonical example of what I called Victorian absolutism; as a logical positivist (I think that's what he's considered, although I could be wrong, but you get my meaning), Russell was very fond of the idea that logic can and should underpin all human progress. I cited the incompleteness theorem in my original note to Dice precisely because it is Gödel's results that demolished Russell and Whitehead's enterprise to come up with an axiomatic foundation for all mathematics. Valid iconoclasm is always interesting because in calling attention to the cracks, it at least suggests new outlines of what may be possible. Anyway, what you note about the Peano axioms is true as far as it goes, but they give only a partial system of arithmetic - i.e. they are less powerful than traditional arithmetic. Any system of axioms that is at least as powerful as arithmetic (i.e. any true arithmetic statement has an equivalent theorem derivable from the axioms) must be incomplete.

Second, I don't think prime factorization has anything to do with anything here. Prime factorization is a consequence of the existence of whole numbers and multiplication. It doesn't derive from an axiom (I suppose you could make an axiom that asserts it, but it would be pointless). If you took normal arithmetic, weakened it into PA, and then killed multiplication, sure, Gödel's theorems would be moot, but so would be nearly all of physics. How can you imagine doing physics without calculus or matrix mechanics? I'd be very surprised if you could come up with any useful reformulation of, say, quantum mechanics in a PA-style framework.

A third point, which is just a consequence of Gödel's theorems but worth clarifying, is that generally speaking there isn't just one true statement that can't be proved; in sufficiently powerful systems, there could be an infinity of them. (To my knowledge nobody has ever shown the number of such problem statements to be finite, even if you exclude trivial extensions of the liar's paradox; as I understand it, it was attempts to plug holes by adding an axiom for each unprovable statement that led Gödel to his second theorem.) Chaitin's work (following Turing, Kolmogorov and others) went much further, asserting that not only are there problem statements such as these, but there are perfectly well-defined numbers that can never be computed, even in theory, and many of these correspond to problems in pure mathematics.

A final point about formalism. Maybe this is a worldview thing, but I think it's really odd to regard formal theories of either math or physics as somehow more fundamental than the people who came up with those theories in the first place. (Wittgenstein makes a similar criticism of the Principia Mathematica.) So I agree with the final point but in a much stronger form - I'd argue that *most* (not just some) of the interesting questions simply can't be asked in any formal system. Formal systems require formal languages, and there are all kinds of issues with formal languages, including that they don't permit the kind of ambiguity and flexibility of meaning that real human ideas and language employ routinely. Even simple systems of formal argumentation (i.e. language plus logical axioms plus logic that can be used to derive true statements from other true statements) have been proven to be interminable without special restrictions - i.e., you can't prove that an argument will ever terminate without exhausting the entire space of possible arguments, which can be infinite even in simple systems.

All of which leads me simply to reiterate: argumentation, logic, math and theory are all really useful tools, but they're just tools. They're constructions of human imagination and ingenuity, and only meaningful to the extent the people using them make them so. You simply can't abstract away the squishy human factors without engaging in the fantasy of the disembodied non-human intellect. So I'm all for good and useful theories of everything, but they won't really be theories of *everything* until they somehow incorporate all the weirdness, pathos and contrariness of human existence. I don't say that's impossible, but I do think if it exists it will look somewhat more like Asimov's "psychohistory" in *Foundation* than the current representations of the physical world.

Just to be clear, I'm claiming that the axiomatic system required to describe my toy theory of everything is _not strong enough_ to include Peano arithmetic. It is certainly not strong enough to include ZFC. The 2nd incompleteness theorem is beside the point; it's the first one you need, and you need the hypothesis of PA for the first theorem. You don't need PA to describe my toy model of everything, because you don't need any notion of the natural numbers to formulate physics. To describe the real numbers, all you need are the field axioms plus some variant of the completeness property of the real numbers. You do not need any notion of natural numbers. The resulting theory is A LOT simpler, _and_ weaker, than number theory. (Just to remind you, a field is an extremely restrictive, simple kind of ring, in which the second operation is both commutative and invertible. The integers are a nasty kind of commutative ring because the quotient of two integers isn't an integer; the reals are a field.)

This means that the hypotheses of the first incompleteness theorem do not hold for the toy theory of everything; therefore the first incompleteness theorem does not hold. The second incompleteness theorem is not terribly relevant here, because the original question was about unknowability rather than consistency proofs.

Godel's proof of the first incompleteness theorem absolutely does rely on unique prime factorization. See here: http://en.wikipedia.org/wiki/Proof_sketch_for_G%C3%B6del%27s_first_incompleteness_theorem

Of course a lot of my argument relies on the fact that, as of now, you don't need integers to formulate any physical theory. (Quantization is not in the fundamental equations, and therefore not in the language of the theory, it's an emergent property.) Even if you did have integers, there'd be some integer-valued quantities and some real- (or complex-) valued quantities; you would _never_ have to multiply the integers with each other.

I would agree, incidentally, that physics isn't a formal system, though this isn't related to the fact that physicists are slapdash about their math. The shoddiness is mostly a matter of (1) not caring about existence proofs, (2) not caring about convergence. This isn't really about axiomatics per se. To my mind the more basic issue is that a formal system requires a finite _language_, which implies a finite number of quantities, whereas physicists have traditionally been interested in all sorts of objects including those that can't be formalized in the original language. (e.g. my example of pressure.) Besides, the language of physics continually picks up new objects that are required to describe new experimental results.

As for the other issues:

1. I would probably not have posted this if I'd known you'd read it. I'd have phrased the first sentence differently if I had. I had no desire to engage your original arguments, which seemed (and still seem) wild extrapolations from questionable mathematics -- and they'd have been wild extrapolations even if the mathematics weren't questionable.

2. As for why I care: a. I think it's bad for intellectual discourse when people (and this includes major physicists like Bohr) make claims about general philosophical issues that they back up by appealing to (irrelevant) physics or math, because, as far as most people are concerned, this sort of appeal is an appeal to authority ("Science says..."). b. As for the specific argument in this post, I was thinking aloud about an issue in predicative logic that I find interesting.

3. In particular, I think the role of general intellectual conditions on mathematical practice can be overstated, and that historians have the bad habit of either leaning on the history to discredit the mathematics or (more irritatingly) vice versa. There's a desire to fit mathematical results into these pat morality tales about Victorians etc. that I think is on the whole pernicious and makes it hard to evaluate anything at all on the merits.

I should mention a potential objection to this construction. The field axioms specify a 0 and a 1 that are not equal to each other, so in principle, if you bundled a strong enough set theory into your axioms for the theory of everything, you might be able to construct the integers out of the real numbers using 0,1, and the + operation on the reals. But of course you don't _have_ to bundle a strong set theory into the theory of everything, and in particular you don't need a theory of functions that lives in the power set of the real numbers. The only sets that have to be specified for calculus to work are intervals, which can be specified in short strings using the < and > signs, unlike the natural numbers.

I should add, I guess, that I agree that theories of everything are not theories of _everything_: physics is ultimately about organizing empirical data in instructive ways. The contrary view is a straw man. As far as I can tell, however, this fact has no important philosophical consequences. In particular, from the POV of the human element, it doesn't seem to matter in the least whether there's one formal system or many overlapping ones (as is the case in science and perhaps in math). Even if Hilbert's program and Laplace's program had succeeded, there would have been enormous numbers of questions that could not have been answered during the finite interval of human existence. "Infinity" is an important concept only _formally_, and Godel's results are also important only formally. In practice, there are integers so large that it would take the age of the universe to write them; most integers are, of course, still larger than that; there are presumably regular patterns involving such numbers, but obviously no one would ever have been able to study them.

Nice post. I recently got into a long argument with someone defending Penrose's absurd claims about consciousness, so I feel your pain about cavalier uses of Godel's theorem.

That said, I'm pretty sure you can prove the incompleteness theorem without prime factorization. Godel's particular proof requires the uniqueness of prime factorization in order to recover formulae from their Godel numbers, but you can prove the theorem without recourse to Godel numbering.

The idea is that if we have an effectively decidable formal system powerful enough to prove whether or not some arbitrary Turing machine halts or doesn't halt, then we could solve the halting problem by just examining the proofs of the system until we find the one that says whether the machine under consideration halts.
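The asymmetry driving this argument can be seen in a toy form. Halting is semi-decidable: simulate the machine, and if it halts, some finite step budget witnesses that; but no finite budget ever certifies non-halting. A sound and complete proof system for halting statements would erase that asymmetry (enumerate its proofs; one side must eventually show up), contradicting Turing. A minimal sketch, using Python generators as stand-in "machines":

```python
def runs_to_halt_within(machine, steps):
    """Simulate a toy machine (a generator function) for at most `steps` steps.
    True means it provably halts; False is merely inconclusive."""
    g = machine()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True  # halted within the budget: a finite certificate
    return False  # budget exhausted: we have learned nothing

def halter():   # halts after two steps
    yield
    yield

def looper():   # never halts
    while True:
        yield

assert runs_to_halt_within(halter, 10) is True
assert runs_to_halt_within(looper, 10**4) is False  # no budget ever says "never"
```

The proof-enumeration move in the paragraph above is what would upgrade the inconclusive `False` into a definite "never halts" -- and that upgrade is exactly what cannot exist.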

This proof entails the incompleteness of arithmetic because the claim that a particular Turing machine halts could be written as a statement about integers. But does expressing whether or not a machine halts require the full resources of Peano arithmetic? I'm not so sure.

I haven't really thought about this much, so it's probably wrong, but say we assume there is an effective procedure for translating a Turing machine description into a description of some physical system instantiating the initial state of that machine. Say furthermore that we have an effective physical theory that takes physical descriptions of the initial state of a machine and tells us whether or not it reaches an equilibrium state, and if it does, whether that equilibrium state is the halt state. This theory could not possibly work for every machine description, or we'd have a solution to the halting problem.

Now of course this only places a restriction on the particular sort of theory I've envisioned, and it may be unlikely that our TOE will look like that. But the point is that this kind of theory need not be strong enough to express PA (as far as I can see, at least). So the general claim that we can only prove incompleteness results for theories that include arithmetic doesn't seem to be right.

Yes?

I think that whenever you try and translate machine language into language in axiomatic systems, you basically need the entire strength of PA -- or some other system that's strong enough to write a code in -- to do it. You need to reconvert the decidability talk into talk that's expressed in the object language of the relevant formal system, and to do that you need to arithmetize the object language somehow, so you can feed it back into itself; to do this, I'm pretty sure you need some result at least as strong as prime factorization. I know that the 1st inc. theorem is not true for PA without multiplication.

I take it that what you want to do is basically take a universal Turing machine, write it in terms of its atoms, use the putative TOE to initialize the atoms, have it evolve according to certain rules, and conclude from what its final physical state is whether the machine halted or not. One possible problem here -- and I haven't thought about this too hard so I'm not sure how much force this has -- is that my TOE would be time-reversible because the microscopic laws of physics are. In particular this might mean that "halting" is not a well-defined concept as the system's evolution would likely be cyclical with a very long period. To get thermodynamics in you might have to use the metalanguage (or greatly expand the object language or something).

I think that part of the problem is that in my theory of everything the universe is a finite-state machine rather than a Turing machine. I think this is almost certainly true about the actual universe.

Let me try and approach this slightly differently. Suppose we had a Turing machine designed to spit out truths about the universe using the TOE. Suppose the Turing machine is _in_ the universe. Then the statement "the Turing machine does not halt" is a statement that's utterable in the language of the TOE: contradiction.

But this is completely nonconstructive re the TOE, and gives no reason to believe anything about the nature of the TOE or what sorts of axioms it contains. What's probably illegitimate is to assume that this fake TOE for a universe with a Turing machine in it is sufficiently _like_ the TOE for our universe that we can assume that the fake TOE doesn't include PA.

I wasn't really thinking of a TOE that dynamically evolves the initial state of the computer to figure out whether it halts. If that were the structure of the theory then it wouldn't be able to tell us in finite time whether a particular machine doesn't halt. I was envisioning some theory that could look at initial states and divide them into halt/no-halt without having to actually evolve the state to find out. Don't really know how this would work. I told you I hadn't thought it through.

Yeah, fair point on the non-constructiveness of my argument.

A couple of tangential physics-related questions:

1. What do you mean when you say we only add quanta? Do you mean that all quantum numbers are additive? What about parity?

2. Do you really think the TOE (if there ever is one) will be time reversal invariant? You presumably rule out collapse interpretations of quantum mechanics, then?

1. Parity is additive mod 2. Generally, I wouldn't go so far as to deny that multiplying integers is never _useful_ in QM -- for instance, you build up the Hilbert space for many-spin systems through tensor products -- but this is meta-language talk. If we had a theory of everything it would come with a full Hilbert space.

2. Yes, I rule out collapse interpretations of QM because it's straightforward to get all the "collapse"-like phenomena we see from simple models of decoherence. OTOH the time evolution of _parts_ of an ever-expanding universe might seem irreversible, and it might not make much sense in this case to talk about the wavefunction of the entire universe.

argh, bloody double negatives

I'm very attracted to an interpretation of QM that treats the Schrodinger dynamics as complete and doesn't have any additional collapse or hidden variable postulate, but I can't bring myself to commit to it fully.

One sort of nebulous issue I have is the ontological weirdness of such an interpretation. I'd like the fundamental ontology of my theory to involve particles (or fields) moving around in spacetime, not a wavefunction moving around in configuration space. Especially since configuration space is so manifestly conceptually parasitic on regular 3-D space. It has a preferred co-ordinate system that only makes sense if you interpret the co-ordinates as positions in "real" space. Surely there's some sort of trade-off between the internal simplicity of the theory and the violence it does to our ordinary conceptual scheme. Collapse theories are surely uglier, but they're also less weird, no? I'm thinking of spontaneous collapse theories here, not the Copenhagen interpretation, which is both very ugly and very weird.

A more concrete issue is the role of probabilities. How do you make sense of probabilities in a decoherence-based interpretation? Why should I make bets as if I care more about future branches that correspond to greater amplitudes? I'm sure there are answers to these questions in the literature, but I haven't really been working on QM so I haven't come across them. I've heard that Deutsch has proved some sort of Savage-style decision-theoretic result that says it's rational to set our expectations according to the norm-squared measure. I haven't read the paper, but I can't imagine how he could have possibly shown this without begging the question.

Anyway, don't feel obliged to continue this, especially since we're no longer talking about the post. I'm just trying to avoid grading papers.

The choice between pure QM and GRW isn't entirely about prettiness as GRW is testable. I think GRW is worth looking for but if we don't find it at a reasonable scale I don't know what ontological status I ought to accord a postulate that I'd never have to use.

I don't know what you mean re coordinate space having a preferred basis.

As for probabilities: I know there are supposed to be paradoxes Out There but I really have no idea of what they are -- would be curious to know what they are though. If you buy decoherence then for all practical purposes QM reduces to CM for large objects, so the different paths don't interfere on relevant timescales, so many-worlds reduces to classical ignorance.

Re your main response to me: fair points all. What you characterize as wild extrapolations - I don't really disagree, although understand that, as in your case, my original comments weren't intended as a careful or persuasive argument about anything. I'm pretty sure I could restate most of my thoughts in a way that would at least clarify where I'm coming from (not that you would necessarily find that any more convincing). The crux of my position is that I don't really believe any claim, statement or thought (including theories, proofs, etc.) is objective, ever; there is always some context around it that is saturated with subjectivity. So I am much less interested in logic and logical correctness per se than in what people try to do with it; wild extrapolation doesn't bother me if it gets you someplace interesting that isn't obviously just fantasy disconnected from any meaningful human experience. Maybe that's a kind of anti-intellectual laziness on my part. I find intellectual discourse to be empty when it doesn't connect with something that I can really care about (which tends to be more things like joy or suffering, and questions of the sort that tend to have no clear answers and probably never will). I tend to think of the experiential reality of people (the sum of thoughts, feelings and experience) as generally more meaningful than the material reality that is the domain of physics. Anyway I claim no high ground of any kind on this; it's obviously idiosyncratic, and part of the reason why I could never be happy in the profession of physics even though I find the substance of physics to be still amazing.

That said, I wish I could understand how your toy TOE could function as an axiomatic system without something as strong as arithmetic. How do you talk about either wave or particle mechanics without multiplication, much less calculus? (You seem to indicate that you *can* do calculus only with sets of intervals.) How do you handle equations of motion? Is it the case that there is actually an equivalent axiomatic formulation that is weaker than PA? That seems impossible, but honestly the closest I got to this sort of thing was real analysis and group theory, and that was long enough ago I would be hard pressed to really follow either proof or disproof. The point is moot to the extent that whether a TOE is axiomatic or not seems to be mainly a question of language and formulation, independent of the goodness of the theory. It certainly would be interesting to have a valid TOE expressed in a logically simple way.

Overall, I appreciate your responses to my first reply. We'd probably have to agree to disagree on a lot of points but I applaud your explication and clarity. And generally agree with your opinions about pernicious tendencies, although I am (alas) at least somewhat guilty of having them myself, although I believe only occasionally, and with justification.

On configuration space and preferred basis: Take a point in configuration space and move away from that point in some direction. It seems to me that some directions are privileged in the following sense: moving along them corresponds to one particle moving (in 3D space) while the others stay stationary. Directions of this sort form the natural axes for the space. If you rotate the coordinate system, you'll probably end up with one in which motion along an axis corresponds to a quite complicated set of motions of multiple particles in 3D space.

Think of a system of two 1-D particles, so a 2-D config space. If I pick the natural co-ordinatization of the space it's easy to construct the Hamiltonian. I just take the kinetic energy operator for each particle. But if I pick a rotated coordinate system it won't be that easy. I can't just take the Hamiltonian to be the sum of second derivatives along each axis; the expression will be way more complicated.
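The two-particle example can be spelled out (my notation, with one caveat noted below). In the natural coordinates the kinetic operator is a clean sum, but under a configuration-space rotation by $\theta$ a cross-derivative appears:

```latex
K = -\frac{\hbar^2}{2m_1}\frac{\partial^2}{\partial x_1^2}
    - \frac{\hbar^2}{2m_2}\frac{\partial^2}{\partial x_2^2},
\qquad
u = x_1\cos\theta + x_2\sin\theta,\quad
v = -x_1\sin\theta + x_2\cos\theta,
```

```latex
K = -\frac{\hbar^2}{2}\left[
\left(\frac{\cos^2\theta}{m_1}+\frac{\sin^2\theta}{m_2}\right)\frac{\partial^2}{\partial u^2}
+\left(\frac{\sin^2\theta}{m_1}+\frac{\cos^2\theta}{m_2}\right)\frac{\partial^2}{\partial v^2}
+2\sin\theta\cos\theta\left(\frac{1}{m_2}-\frac{1}{m_1}\right)\frac{\partial^2}{\partial u\,\partial v}
\right].
```

The caveat: the cross term vanishes when $m_1 = m_2$ (or $\theta$ is a multiple of $\pi/2$), so for equal masses the kinetic term is actually rotation-invariant in configuration space, and the privileged axes there come from the interaction potential $V(x_1, x_2)$ rather than from $K$.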

On probabilities and decoherence: Let's say I set up a Schrodinger's cat type experiment. There's an atom with a 2/3 chance of decaying within a certain period of time, killing the cat. For simplicity, let's assume there are only two relevant possibilities: decay or not. Ignore the possibility of decay at different instants.

Now what happens after the time has passed is that the atom is in a superposition of decay/non-decay states. Because of decoherence, these branches of the wave function separate in config space so there is essentially no possibility of interference. I open the box and on one branch I see a dead cat and on the other I see a living cat.
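As a minimal numerical sketch of the state just described (the amplitudes follow from the stated 2/3 decay chance; the variable names are mine, not from the thread):

```python
import math

# Hypothetical amplitudes for the two decoherent branches of the
# cat experiment: 2/3 chance of decay (dead cat), 1/3 of no decay.
amp_dead = math.sqrt(2 / 3)   # amplitude of the "decayed / dead" branch
amp_alive = math.sqrt(1 / 3)  # amplitude of the "intact / alive" branch

# Born rule: probability = |amplitude|^2
p_dead = amp_dead ** 2
p_alive = amp_alive ** 2

# The two branches exhaust the possibilities, so the weights sum to 1.
assert math.isclose(p_dead + p_alive, 1.0)
```

The puzzle in what follows is precisely why these squared amplitudes should govern betting behavior when both branches are realized.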

Now what should I be thinking before the experiment is performed, i.e. before the splitting occurs? If I take the quantum probabilities seriously I should be betting on eventually seeing a dead cat. But what does this mean? In some sense, I should expect to see both dead and alive cats in the future. After all, both branches contain me. It's not like I end up in one branch or the other (at least not if you think consciousness is purely physical). So my state prior to the experiment is not a state of uncertainty at all.

The analogy with classical uncertainty doesn't work here. With classical uncertainty I would know that only one of the possibilities is actual, but I would be subjectively uncertain about which one I would see. This is the situation I'd be in after the decay has occurred and before I open the box. But before the decay there is no uncertainty. I'm not uncertain about which branch I'll end up in -- I know I'll end up in both. The only thing differentiating them is the greater amplitude of the "dead" branch. But how does this justify my betting behavior? Why should I care more about the higher amplitude version of me?

Matt: it isn't only about the operations, it's also about the set you're quantifying over. For instance, you could have all the operations you want on the finite field {0,1} with modular arithmetic, and it wouldn't get you anywhere towards proving Godel's claim. Similarly with the reals: if your underlying space is the reals with +, x, etc. and you have a weak set theory, there is still no way to make statements about the _natural numbers_ in this theory, unless you add some fairly strong set-theoretic axioms. Peano arithmetic is a theory about arithmetic _on the natural numbers_, NOT on the real numbers. If your quantifiers "for all" and "there is" cannot range over the naturals but must range over all the reals, you cannot do Peano arithmetic, so Godel's proof doesn't work.
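For concreteness, the prime-factorization encoding described earlier in the thread can be sketched in a few lines of Python (the function names and the a=1, b=2, ... alphabet mapping are just the illustrative conventions from the "babe" example):

```python
def primes(n):
    """First n primes by trial division (fine for short strings)."""
    out, k = [], 2
    while len(out) < n:
        if all(k % p for p in out):
            out.append(k)
        k += 1
    return out

def godel_encode(s):
    """Encode a lowercase string as 2^c1 * 3^c2 * 5^c3 * ...,
    where ci is the alphabet position of the i-th character (a=1, b=2, ...)."""
    n = 1
    for p, ch in zip(primes(len(s)), s):
        n *= p ** (ord(ch) - ord('a') + 1)
    return n

def godel_decode(n):
    """Recover the string by counting each prime's multiplicity
    in the (unique) prime factorization."""
    s = []
    for p in primes(64):  # enough primes for strings up to length 64
        if n == 1:
            break
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        s.append(chr(ord('a') + e - 1))
    return ''.join(s)

# "babe" -> 2^2 * 3^1 * 5^2 * 7^5, as in the example earlier
assert godel_encode("babe") == 2**2 * 3**1 * 5**2 * 7**5
assert godel_decode(godel_encode("babe")) == "babe"
```

The decoding step is what depends on unique prime factorization -- and, as the comment above notes, the whole construction presupposes that your quantifiers can range over the naturals in the first place.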

Tarun:

re configuration space: I think this is because you're thinking about two noninteracting particles. In the opposite limit of a molecule, the correct basis freezes out the relative coordinate, leaving a single degree of freedom. For strongly interacting systems you would choose some kind of nasty linear combination for the "dressed particle," etc. You want to be as close to the normal modes of the system as possible, whatever they are. As for why a free particle is so straightforward to write in the momentum basis, that is because momentum is a good quantum number, which is because space is homogeneous (translation-invariant) by construction.

re decoherence: The "probability that the cat is dead" is always what it is, it's the relevant diagonal component of the reduced density matrix for the cat in the alive-dead basis. This is, if you like, the "propensity." All that happens when you measure the cat is that the off-diagonal elements oscillate and go to zero. (OK, this only works if you have an ensemble of cats. But probabilities only make sense w.r.t. ensembles.)
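As a toy illustration of that claim -- a sketch only, with made-up parameters omega and decay_rate for the oscillating, decaying coherence -- the reduced density matrix in the {dead, alive} basis might look like:

```python
import numpy as np

# Toy reduced density matrix for the cat in the {dead, alive} basis.
# The diagonal weights 2/3 and 1/3 are the quantum probabilities from
# the experiment above; omega and decay_rate are invented parameters.
def rho(t, omega=3.0, decay_rate=5.0):
    # Off-diagonal coherence: oscillates (omega) and decays (decay_rate).
    c = np.sqrt(2/3 * 1/3) * np.exp((1j * omega - decay_rate) * t)
    return np.array([[2/3,        c],
                     [np.conj(c), 1/3]])

rho0, rho_late = rho(0.0), rho(10.0)

# Decoherence: the off-diagonal elements oscillate and go to zero...
assert abs(rho_late[0, 1]) < 1e-12
# ...while the diagonal "propensity" that the cat is dead never changes.
assert np.isclose(rho_late[0, 0].real, rho0[0, 0].real)
```

At t = 0 this is the pure superposition (the matrix has a zero eigenvalue); at late times it is the classical mixture, with the diagonal entries untouched throughout.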

For macroscopic entities the situation re branching is simpler: the moment you branch, the environment measures you so you end up (if you trace over the environment) in a classical mixture, and the usual classical rules apply. In particular, your consciousness splits off into two branches, each observing its universe and never to meet again, so you really can think of yourself as classically choosing a fork in the road.

I don't disagree that after decoherence you can treat the probabilities like classical uncertainty. The problem is how to think of them before the decay.

Say I'm walking on a road that forks some distance ahead. One fork leads to a whorehouse and the other leads to a church. Say I also know that when I get to the fork I'll split into two and each version of me will take one path. Should I expect to end up at the whorehouse or the church? I can't see how this question has any meaning for me before the split. I know that I will end up at both. You can tell me that there's some parameter in your theory of splitting that assigns a weight of 2/3 to the whorehouse path and 1/3 to the church path, but this parameter does not connect in any obvious way to my subjective decision-making pre-split. In the absence of this connection I don't know why I should call it probability.

This seems to me a puzzle about consciousness and continuity rather than one about _measurement_ per se. Naively, a possible resolution is to assign a "you" to each consistent history, and say that the "you"s deviate when the consistent histories do, maybe because they get entangled with something. On this view, "you" before the measurement could be one of a large number of indistinguishable people, of which some fraction measure a live cat etc. This would interpret the probabilities, though probably at the expense of introducing some rather ghastly new complications.

Yeah, this is Dieter Zeh's "many-minds" interpretation. As you note, it involves ghastly complications. One of the advantages of decoherence based approaches over certain versions of Copenhagen is that they don't posit mind-body dualism. But many-minds seems to reintroduce this dualism. The physical stuff evolves deterministically according to Schrodinger's equation, but the minds or consciousnesses evolve stochastically.

This does not particularly bother me, as the mind is some sort of emergent macroscopic object; as long as the theory makes sense without minds, it seems unproblematic, since (assuming the mind is determined by physical properties) given a fully specified physical configuration I can figure out what it's thinking. I would say that my problem with Copenhagen is that it treats measurements-by-inanimate-environment and measurements-by-conscious-observer as different physical processes, whereas they are obviously the same thing.
