If you haven't read it yet, I recommend
Batterman's article on the philosophical connection between emergent phenomena and singularities. It is nice to have philosophers taking the
renormalization-group idea seriously, as this idea has had an enormous impact on how physics is done and interpreted
by physicists -- at least by theorists -- but hasn't made it into the pop physics books or the undergraduate curriculum. Batterman correctly observes that physicists understand emergent phenomena in terms of the renormalization group; that the renormalization-group concept needs limits (like that of infinite system size) to be made precise; and that the limits lead to singularities. He then goes on to make what I think are some misleading statements about the interpretation of those singularities. In this post I'll try to run through the usual argument and explain how I think the singularities ought to be interpreted.
I understand emergent phenomena in terms of the following analogy. Suppose you drop a ball onto a hilly landscape with friction, and ask where it will end up a very long time later. The answer is evidently one of the equilibrium points, i.e., a summit, a saddle point, or (most likely) a valley. Two further points are worth making here: (1) It does not matter where on a given hillside the ball starts out; it will roll to the bottom of that hill. In other words, very different initial conditions often lead to the same long-time behavior. (2) It matters very much which side of the summit the ball starts out on; small differences in initial conditions can lead to very different long-time behavior. So what constitutes an "explanation" of the properties of the ball (say, its response to being poked) a long time after its release? One possible answer is that, because mechanics is deterministic, once you've described the initial position and velocity you've "explained" everything about the long-time behavior. However, this is unsatisfactory: point (1) implies that most of this "explanation" would be irrelevant, and point (2) implies that the inevitable fuzziness of one's knowledge of initial conditions could lead to radically indeterminate answers. A better answer is that the explanation naturally divides into two parts: (a) a description of the properties (curvature etc.) of the equilibrium points, and (b) the (generally intractable) question of which basin of attraction the ball started out in. In particular, part (a) on its own suffices to classify all possible long-time behaviors; it reduces a very large number of questions (what does the ball smell like? at what speed would it oscillate or roll off if gently poked?) to a single question -- approximately where is it? (Approximate position typically implies exact position in the long-time limit, except when there are flat valleys.)
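To make the analogy concrete, here is a minimal numerical sketch of both points. Everything in it is illustrative rather than taken from anywhere: I've invented a double-well landscape h(x) = x^4/4 - x^2/2 (summit at x = 0, valleys at x = ±1) and used overdamped dynamics, i.e., friction so strong that the ball simply slides downhill.

```python
# A minimal sketch of the ball-on-a-landscape analogy: overdamped
# dynamics x' = -h'(x) on a hypothetical double-well landscape
# h(x) = x**4/4 - x**2/2, with a summit at x = 0 and valleys at
# x = +/-1. Names and parameters are illustrative, not from the post.

def slope(x):
    """Derivative h'(x) of the landscape h(x) = x**4/4 - x**2/2."""
    return x**3 - x

def settle(x0, dt=0.01, steps=20000):
    """Roll the ball downhill from x0 until it is (approximately) at rest."""
    x = x0
    for _ in range(steps):
        x -= dt * slope(x)
    return x

# Point (1): very different starts on the same side of the summit
# end up in the same valley.
print([round(settle(x0), 3) for x0 in (0.2, 0.9, 2.5)])   # all ~ 1.0

# Point (2): tiny differences straddling the summit lead to
# opposite valleys.
print(round(settle(-1e-6), 3), round(settle(1e-6), 3))    # ~ -1.0, ~ +1.0
```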
"Emergent" (or "universal") phenomena are descriptions of equilibrium points, i.e., answers to part (a) of the question. The renormalization group concept is the notion that the large-scale behavior of a many-body system is like the long-time behavior of a ball in a frictional landscape, in the sense that it is governed by certain "fixed points," which can be classified, and that theories of these fixed points suffice to describe the large-scale properties of anything. So, for instance, there are three states of matter rather than infinitely many. The analogue of time is the length-scale on which you investigate the properties of the system -- as you go from a description in terms of interacting atoms to one in terms of interacting blobs and so on -- and the analogue of the "loss of information" via friction is the fact that you're averaging over larger and larger agglomerations of stuff. (All of this is quite closely related to the central limit theorem.)
The role of infinite limits in the ball-and-landscape case is obvious: if you start the ball very close to the top of the hill (where, let's say, the slope is vanishingly small), it will take a very long time to roll off. So the fixed-point idea only really works if you wait infinitely long. However, it's also obvious that if you wait a really long time and the ball hasn't reached its final equilibrium, that is because it is lingering near some other equilibrium (the summit, say); so the equilibrium description becomes arbitrarily good at arbitrarily long times. (This is of course just the usual real-analysis way of talking about infinities.) The infinite-system-size limit is precisely analogous: while the fixed-point description only strictly holds at infinite size, this "infinity" is not a pathology but is to be interpreted in the usual finitist way -- given epsilon > 0, there exists a size N, etc. Such epsilon-N statements are true regardless of how far the sequence is from its limit, but they grow increasingly vacuous and useless as epsilon grows; something similar is true of dynamical systems and the renormalization group.
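Spelled out (in my notation, with O(L) some observable measured at system size L and O_* its fixed-point value):

```latex
\[
  \forall \varepsilon > 0 \;\; \exists L_0(\varepsilon) \quad \text{such that} \quad
  L > L_0(\varepsilon) \;\Longrightarrow\; \bigl|\, O(L) - O_* \,\bigr| < \varepsilon .
\]
```

The singularity lives only in the limiting value O_*; every statement about finite L is perfectly regular.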
I should explain what this has to do with fractals, by the way. In the case of the ball, a fixed point is defined as a configuration that is invariant under the equations of motion; in the case of the many-body system, a fixed point is a configuration that is invariant under a change of scale, i.e., a fractal. A continuum object is, of course, a trivial kind of fractal: you can't see its graininess without a microscope, and it doesn't seem to have any scale other than the size of its container. Systems near phase transitions are sometimes nontrivial fractals -- e.g., helium at the superfluid transition is a fractal network of droplets of superfluid in a bath of normal fluid, or vice versa. Phase transition points, btw, correspond to ridges; if you move slightly away from them, you "flow" into one phase or the other. The association between unstable equilibria and nontrivial fractals is not an accident: any departure from the nontrivial fractal (say, in the helium case) leads to either superfluid or normal fluid preponderating at large scales; if you average on a sufficiently large scale, the density of droplets of the minority phase goes to zero, and you end up in one trivial phase or the other.
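This flow away from the ridge is easy to caricature numerically. Here is a minimal sketch, assuming a 1d lattice of two phases coarse-grained by majority rule on blocks of three -- a standard toy transformation, not the actual helium calculation: unless the two phases start exactly balanced, the minority phase's density shrinks toward zero under repeated blocking.

```python
# A minimal sketch of flowing away from an unstable fixed point under
# coarse-graining: a 1d lattice of two phases (0s and 1s), renormalized
# by majority rule on blocks of three. The minority phase's density
# shrinks at each step unless we start exactly at the ridge p = 1/2.
# This is a toy stand-in for the actual helium case, not a model of it.
import random

def majority_blocks(sites):
    """One coarse-graining step: replace each block of 3 by its majority."""
    return [int(sum(sites[i:i+3]) >= 2) for i in range(0, len(sites) - 2, 3)]

random.seed(0)
for p in (0.45, 0.5, 0.55):
    sites = [int(random.random() < p) for _ in range(3**9)]
    densities = []
    for _ in range(6):
        densities.append(round(sum(sites) / len(sites), 3))
        sites = majority_blocks(sites)
    print(p, densities)
# Away from p = 1/2 the minority phase's density drifts toward zero;
# at p = 1/2 it (statistically) stays put -- the unstable fixed point.
```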