So how does transformation occur in a scale-free universe? How do we talk about ‘what’ caused ‘what’ and ‘when’ if everything we currently know is made of some unknown level of fine-grained energy relationships? We cannot. Until we develop better measurement technology we will not know for sure what is down there. What we can say for sure is that the billions of different ‘things’ that constitute our universe are all composed of what looks like the same matter-energy. We all know that a teaspoon is different from a bluebird, but we also know that the two are made of exactly the same matter-energy. What makes the bluebird different from the teaspoon is not different elementary particles; it is how the very same elementary particles maintain different energy relationships. In fact, different energy relationships are what constitute the different forms of matter.
This investigation pursues how these different relationships were sampled, maintained and reproduced from a completely undifferentiated Big Bang to the vastly differentiated universe of today. However, presuming a scale-free universe has important ramifications for how we choose to measure this differentiation.
How can we measure the accrued differentiation of the universe? Can we call this measure complexity, and if so, is the universe getting more or less complex over time? It started as a tiny, uniform ‘hot soup’ of matter-energy and, “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved” (Darwin, 1859). Obviously it is getting more complex – isn’t it? Well, that would depend on how we define ‘complexity.’
Common sense definitions of complexity usually rely on measures of ‘more’ and ‘different.’ The more different elements a system has, the more complex it is. Biologist Stephen Jay Gould refers to this measure when he describes complexity as the “number and form of components” (Gould, 2002, p. 1264; Gould, 1996). By his definition, the more different components an organism has, the more complex it is.
On the other hand, physics and computer science definitions of complexity often refer either to information theory or algorithmic complexity. Peter Grünwald and Paul Vitanyi summarize the relationship between the two nicely:
“Both theories aim at providing a means for measuring ‘information.’ They use the same unit to do this: the bit. In both cases, the amount of information in an object may be interpreted as the length of a description of the object. In the Shannon approach, however, the method of encoding objects is based on the presupposition that the objects to be encoded are outcomes of a known random source—it is only the characteristics of that random source that determine the encoding, not the characteristics of the objects that are its outcomes. In the Kolmogorov complexity approach we consider the individual objects themselves, in isolation so-to-speak, and the encoding of an object is a short computer program (compressed version of the object) that generates it and then halts” (Grünwald & Vitanyi, 2004).
Note first that the terms ‘information’ and ‘complexity’ appear interchangeable here. Both theories measure information, even though Kolmogorov calls his measure complexity. Shannon developed his definition first, and his solution was to measure information as a relationship between the outcome that does occur and the probability distribution of what could have occurred. He called this statistical measure ‘entropy’ because entropy, as previously defined by Boltzmann, was also a measure of a future outcome based on its probability in that environment. What Shannon was essentially able to do was show how any system could be described as a communication process: the selection of a signal (what does occur) from a limited set of noise (what could occur).
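Shannon's measure can be sketched in a few lines of code. The following is a minimal illustration (mine, not the author's or Shannon's original notation): the entropy is computed entirely from the probability distribution of what could occur, never from any particular outcome.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over the
    probability distribution of the possible outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin: maximal uncertainty about which outcome will occur.
print(shannon_entropy([0.5, 0.5]))    # 1.0 bit
# A heavily biased coin: the outcome is nearly certain, so a
# message reporting it carries almost no information.
print(shannon_entropy([0.99, 0.01]))  # ≈ 0.08 bits
```

Note that the two distributions describe the same pair of outcomes; only the ‘known random source’ differs, and that alone changes the measured information.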
Kolmogorov was unsatisfied with this solution. He was a mathematician, and he wanted an absolute measure of an individual object, not a probability over an ensemble. He got it by limiting the definition of the system to what was inside it. For him, any system, or entity, is only as complex as the minimum number of bits needed to describe it. By eliminating the need to measure a system’s relationship to its environment, Kolmogorov invented a way to effectively count what something was. The result is that Kolmogorov complexity is essentially a measure of how random something is. Regularities do not add to the count – they compress away – so the measure is mostly a count of the incompressible, random irregularities within a system.
Kolmogorov’s and Gould’s ideas of complexity are similar in that they say nothing about how a system came to be structured as it is; but once it exists, they can measure its complexity. Both measure a static amount of structure. Shannon does not treat system structure as static, but as a dynamic relationship that unfolds over time. Unfortunately, Shannon’s solution just pushes the static assumption a little further down the road by assuming the ‘known random source’ from which the outcome is selected. To define Shannon information one needs absolute statistical knowledge of the closed system within which the signal is selected – its ‘environmental,’ ‘noise,’ or ‘entropic’ potential. Kolmogorov and Gould count static amounts. Shannon counts statistical potential within a static limit.
Now consider how complex complexity can get. The universe contains no closed systems, and no system in the universe is static. So how does one use such metrics to count what does actually occur? It turns out that both the Gould and Kolmogorov metrics are great for measuring systems that already exist and can be easily idealized as static structures. For these measurements, one just needs to stop the universe (i.e. make it absolutely static), draw a box around what you think the system is, and then count. With Shannon metrics one can measure dynamic transformations, but only in limited contexts (i.e. by making the system’s potential to interact with its environment statistically static). In Shannon metrics the box is simply extended, arbitrarily, to enclose the environment the system is contained within.
Now, before we consider the limits of static measurements in a dynamic universe, let us briefly consider how to decide what to count. A deep flaw in all of these measures is that they require a human to decide what is countable and what is not. Do we need to count gluon states to determine the complexity of a one kilogram rabbit? If we did, then a one kilogram rock could be ‘more complex’ than our rabbit: it could have more, and more different, quark and gluon relationships. And what about one kilogram of nebula vapor?
Presumably, when Gould speaks of organismic complexity as a count of more and different parts, he is only considering biological parts: organelles, cells, organs, etc. If Gould wants to count detail up to and including organelles, he chooses to ignore the molecular details within each organelle that make it act uniquely in some situations. Gould is choosing to ignore fine-grained complexity. Kolmogorov complexity also depends on an arbitrary human decision about how to represent the minimum constitutive parts of the system to be computed. What exactly does each bit include – or, more interestingly, not include – and how could that fine-grained and chaotic complexity influence the future?
This brings us back to the inability of static amounts of complexity or information to measure a dynamic universe. Both the Gould and Kolmogorov complexity metrics are fundamentally limited to measuring structure that has already evolved and is arbitrarily ‘bounded.’ They contribute little to predicting future structure. Shannon information can be used to make limited predictions because it describes a process of how statistical relationships unfold over time. Numerous attempts have been made to use Shannon information to predict the kind of transformational structures that actually populate our universe, but little consensus has been reached (Grassberger, 1986; Prokopenko, Boschetti, & Ryan, 2009).
Peter Grassberger does a wonderful job of illustrating the problem of defining information complexity with the set of three images at the top of this post. Here is what he says about them:
“Compare now the three patterns shown … [above]. Fig. 1c is made by using a random number generator. Kolmogorov complexity and Shannon entropy are biggest for it, and smallest for Fig. 1a. On the other hand, most people will intuitively call Fig. 1b the most complex, since it seems to have more “structure.” Thus, complexity in the intuitive sense is not monotonically increasing with entropy or “disorder.” Instead, it is small for completely ordered and for completely disordered patterns, and has a maximum in between (Hogg and Huberman 1985). This agrees with the notion that living matter should be more complex than both perfect crystals and random glasses, say.
The solution of this puzzle is the well-known ability of humans to make abstractions, i.e., to distinguish intuitively between “important” and “unimportant” features. For instance, when one is shown pictures of animals, one immediately recognizes the concepts “dog,” “cat,” etc., although the individual pictures showing dogs might in other respects be very different. So one immediately classifies the pictures into sets, with pictures within one set considered as equivalent. Moreover, these sets carry probability measures (since one expects not all kinds of dogs to appear equally often, and to be seen equally likely from all angles). Thus, one actually has ensembles: when calling a random pattern complex or not, one actually means that the ensemble of all “similar” patterns (whatever that means in detail) is complex or not complex. After all, if the pattern in Fig. 1c were made with a good random number generator, the chance of producing precisely Fig. 1c would be exactly the same as that to produce Figs. 1a or 1b (namely 2^-N, where N is the total number of pixels). If we call the latter more “complex,” it really means that we consider it implicitly to belong to a different ensemble, and it is this ensemble that has different complexity” (Grassberger, 1986).
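Grassberger's ranking – that compression-based measures are smallest for the ordered pattern and biggest for the random one – can be reproduced with three toy ‘images.’ In this sketch (mine; the patterns are crude stand-ins for his figures, not reproductions of them), each image is a 64×64 grid of 0/1 pixels, and compressed size plays the role of the complexity measure.

```python
import random
import zlib

SIDE = 64  # each "image" is a 64x64 grid of 0/1 pixels, flattened to bytes

# Fig. 1a stand-in: a blank, completely ordered field.
ordered = bytes(SIDE * SIDE)
# Fig. 1b stand-in: a Sierpinski-like pattern, intuitively "structured."
structured = bytes(1 if (i & j) == 0 else 0 for i in range(SIDE) for j in range(SIDE))
# Fig. 1c stand-in: one random bit per pixel.
random.seed(1)
disordered = bytes(random.getrandbits(1) for _ in range(SIDE * SIDE))

for name, img in [("ordered", ordered), ("structured", structured), ("random", disordered)]:
    print(name, len(zlib.compress(img, 9)))
```

The compressed sizes increase monotonically from ordered to structured to random, which is exactly Grassberger's point: the measure tracks disorder, not the intuitive ‘structure’ of the middle pattern.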
Clearly, measuring complexity and information is an unresolved problem. Has the universe gained complexity in the sense of Kolmogorov randomness, or in the sense of Gouldian numbers of different parts? And if the universe provides so many natural joints at which to cut it into understandable pieces, how do we choose which joints to use in which situations? Who gets to choose? Perhaps Shannon information allows for a prediction of what might emerge, but it too depends on what we choose to look for and, even more importantly, its predictive power is still confined to the known limits of the chosen system. Clearly, we do not yet know how to measure change in this universe.
The normal procedure in such a case is to choose a definition for complexity and then move on. This investigation proposes to redefine complexity as a relative, not absolute, measurement. Essentially, complexity is a measure of difference between systems; but systems are scale free, so what is environment to one system is component to another. What is a regularity (i.e. signal) in one system can be noise at a smaller scale. A cell is a regular component within your body, which is itself made of many different fluids, boundaries and cells doing many different ‘noisy’ activities. That same cell is a noisy environment for its own regular organelles, which are in turn noisy environments for regular proteins, and on down to molecules, atoms, protons and quarks. The only rule of relative complexity is that environments are always noisier than the systems that are maintained and reproduced within them. Outside is noisier than inside.
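The claim that outside is noisier than inside can be given a toy numerical reading. In this sketch (my illustration, with invented data, not a measurement the investigation itself proposes), a regular ‘component’ is compared against the ‘environment’ that contains scattered copies of it, using compressed fraction as a crude stand-in for noisiness.

```python
import random
import zlib

random.seed(2)
motif = b"CELLCELL"  # hypothetical regular "component"

# The "environment": copies of the component scattered among random bytes.
chunks = []
for _ in range(500):
    if random.random() < 0.2:
        chunks.append(motif)  # the component, maintained and reproduced
    else:
        chunks.append(bytes(random.getrandbits(8) for _ in range(len(motif))))
environment = b"".join(chunks)

# The component viewed at its own scale: pure regularity.
component = motif * 500

def noise_ratio(data: bytes) -> float:
    # Compressed fraction: near 0 for pure regularity, near 1 for pure noise.
    return len(zlib.compress(data, 9)) / len(data)

print(f"component   {noise_ratio(component):.2f}")
print(f"environment {noise_ratio(environment):.2f}")
```

The component's ratio is far smaller than its environment's, matching the rule that the inside is more regular than the outside that contains it.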
This definition is not, in fact, a definition; it is a way to start exploring what a new definition could be. Much of this investigation explores a different way of understanding relative change – how to compare change at different scale levels. However, this exploration is descriptive, not deterministic. It argues for a new way to understand universal evolution by showing how it evolved – how atoms, prokaryotes and minds were selected for persistence – not by a deterministic algorithm or formula that proves it.
As mentioned, this is just a beginning. It is important to impress upon the reader the faults of the current framework and the need for an improved one. To lay this foundation of doubt we need to take one more short peek at another aspect of the complexity of ‘complexity.’