See this post for context.
A "universality class" is a basin of attraction, i.e., it consists of the set of microscopic models that coarse-grain to a particular fixed point. Universality classes are, of course, equivalence classes -- of sequences of models that coarse-grain similarly. This structure is to some extent analogous to the set-theoretic construction of real numbers from rational numbers (i.e., pairs of integers): a real number x "is" an equivalence class of Cauchy sequences of rationals (i.e., all sequences that converge to x). The analogy is admittedly not very good: the reals have binary operations on them, etc., whereas there isn't really anything analogous for models. However, I think it is good enough to get at the main point: viz. that when one talks about the properties of the set of equivalence classes of rational numbers, one is doing a different sort of mathematics from the theory of rational numbers: the theory is defined on a different set, so very different sorts of things are true -- reduction to lowest terms in one case, the extreme value theorem in the other -- and the "reduction" of one theory to the other is a reduction of analysis, not to number theory, but to number theory plus set theory. It is also well-understood in the mathematical case that the reduction is not useful in that it doesn't help you prove theorems about the reals; its only potential use is in consistency proofs (which are anyhow precluded by Godel's theorems).
A similar statement seems to be true in physics -- the theory of fixed points is a theory of equivalence classes of sequences of models; this is not a reduction of many-body physics to particle physics but rather to particle physics plus set theory on renormalization group flows. The clumpy, highly classified, scale-invariant space of macroscopic objects is not like the relatively smooth landscape of parameters allowed by the Standard Model (or the "landscape" in string theory): the reduction is "useless" in the same sense as above. This is closely connected to the intuitive point that coarse-graining doesn't preserve distances in parameter space (two very similar microscopic theories can have very different macroscopic limits, etc.), which is why microscopic theories do not constitute explanations of macroscopic phenomena. Batterman is, I think, correct to try to find more formal and precise ways of saying this than just saying that it's a "useful idealization" to think of emergent phenomena as existing -- while strictly speaking this is all that one can say, "useful" is an ambiguous word, and it is worth emphasizing that emergent phenomena are "useful idealizations" in the same way that real numbers are useful idealizations of the way we talk about rational numbers.
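A toy version of the distance-stretching point (my own illustrative example, not anything in Batterman): suppose the microscopic theory has a single coupling $K$ and one coarse-graining step acts as a map $K' = R(K)$ with stable fixed points at $K = 0$ and $K = \infty$ separated by an unstable fixed point $K_*$ with $R'(K_*) = \lambda > 1$. Near $K_*$ the flow stretches differences,

$$K_n - K_* \;\approx\; \lambda^n\,(K_0 - K_*),$$

so two microscopic theories at $K_* - \epsilon$ and $K_* + \epsilon$, as close together as you like, are driven to the two different fixed points and end up with qualitatively different macroscopic behavior, while wildly different microscopic values of $K$ on the same side of $K_*$ all flow to the same fixed point. The map from microscopic parameters to macroscopic behavior both tears apart and collapses distances.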
Although I don't understand the holographic principle terribly well, I should note John McGreevy's claim that the (d + 1)-dimensional holographic dual of a d-dimensional model can be understood as a stack of d-dimensional slices of the model at various stages under the renormalization group. (The (d + 1)-dimensional universe has two boundaries: a surface corresponding to the original model, and a point corresponding to its fixed point.) I suspect that this only really works for the "AdS"-like models, which don't describe the large-scale structure of our universe, but it would be neat if the renormalization group had a "physical" interpretation.
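As far as I can tell, the precise version of this statement (taking the AdS/CFT lore on trust) uses the Poincaré form of the anti-de Sitter metric,

$$ds^2 = \frac{L^2}{z^2}\left( dz^2 + \eta_{\mu\nu}\, dx^\mu dx^\nu \right),$$

in which each constant-$z$ slice is a copy of d-dimensional flat space, the boundary $z \to 0$ is where the original model lives, and the rescaling $x^\mu \to \lambda x^\mu$, $z \to \lambda z$ is an isometry -- so moving deeper into the bulk (larger $z$) corresponds to looking at the model at longer wavelengths, i.e., further along the renormalization group flow.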