Part 2: Complexity and Information (excerpt from “Origin of Mind: a history of systems”)

So, how does transformation occur in a scale free universe? How can we talk about ‘what’ caused ‘what,’ and ‘when,’ if everything we currently know of is made of some unknown level of fine grained energy relationships? We cannot. Until we develop better measurement technology we will not know for sure what is down there. What we can say for sure is that the billions of different ‘things’ that we currently know constitute our universe are all composed of what looks like the same matter-energy. We all know that a teaspoon is different from a bluebird, but we also know that the two are made of exactly the same matter-energy. What makes the bluebird different from the teaspoon is not different elementary particles; it is how the very same elementary particles maintain different energy relationships. In fact, different energy relationships are what constitute the different forms of matter.

This investigation pursues how these different relationships were sampled, maintained and reproduced, from a completely undifferentiated Big Bang to the vastly differentiated universe of today. However, presuming a scale free universe has important ramifications for how we choose to measure this differentiation.

How can we measure the accrued differentiation of the universe? Can we call this measure complexity, and if so, is the universe getting more or less complex over time? It started with a tiny uniform ‘hot soup’ of matter-energy and, “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved” (Darwin, n.d.). Obviously it is getting more complex – isn’t it? Well, this would depend on how we define ‘complexity.’

Common sense definitions of complexity usually rely on measures of ‘more’ and ‘different.’ The more, different elements a system has, the more complex it is. Biologist Stephen Jay Gould refers to this measure of complexity when he describes complexity as the “number and form of components” (Gould S. J., The Structure of Evolutionary Theory, 2002, p. 1264) (Gould S. J., Full House: The Spread of Excellence from Plato to Darwin, 1996). By his definition, the more different components an organism has, the more complex it is.

On the other hand, physics and computer science definitions of complexity often refer either to information theory or algorithmic complexity. Peter Grünwald and Paul Vitanyi summarize the relationship between the two nicely:

“Both theories aim at providing a means for measuring ‘information.’ They use the same unit to do this: the bit. In both cases, the amount of information in an object may be interpreted as the length of a description of the object. In the Shannon approach, however, the method of encoding objects is based on the presupposition that the objects to be encoded are outcomes of a known random source—it is only the characteristics of that random source that determine the encoding, not the characteristics of the objects that are its outcomes. In the Kolmogorov complexity approach we consider the individual objects themselves, in isolation so-to-speak, and the encoding of an object is a short computer program (compressed version of the object) that generates it and then halts” (Grünwald & Vitanyi, 2004).

Note first that the terms ‘information’ and ‘complexity’ appear correlative here. They both measure information, even though Kolmogorov calls his measure complexity. Shannon developed his definition first, and his solution was to measure information as a relationship between the outcome that does occur and the probability distribution of what could have occurred. He called this statistical relationship ‘entropy’ because entropy, as previously defined by Boltzmann, was also a measure of a future outcome based on its probability in that environment. What Shannon was essentially able to do was to show how any system in the universe could be defined as a communication process. This process was the selection of a signal (what does occur) from a limited set of noise (what could occur).
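To see what kind of quantity Shannon’s entropy is, here is a minimal sketch in Python (the example sequences are our own, purely illustrative). It computes the average information per symbol from a source’s observed probability distribution; the number belongs to the distribution, not to any single outcome drawn from it.

```python
import math
from collections import Counter

def shannon_entropy(outcomes):
    """Shannon entropy (bits per symbol) of an observed sequence of outcomes.

    The measure belongs to the probability distribution of the source,
    not to any single outcome drawn from it.
    """
    counts = Counter(outcomes)
    total = len(outcomes)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A near-uniform source carries the maximum information per symbol;
# a heavily biased source carries much less.
print(shannon_entropy("abababab"))  # 1.0 bit per symbol (two equiprobable symbols)
print(shannon_entropy("aaaaaaab"))  # ~0.54 bits per symbol (biased source)
```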

Kolmogorov was unsatisfied with this solution. He was a mathematician and wanted an absolute measurement of the individual object itself, not a probability. He got it by limiting the definition of the system to what was inside it. For him, any system, or entity, is only as complex as the minimum number of bits needed to describe its components. By eliminating the need to measure a system’s relationship to its environment, Kolmogorov invented a way to effectively count what something was. The result is that Kolmogorov complexity is essentially a measure of how random something is. This is because Kolmogorov complexity does not count the regularities; they are compressed away. His measure is mostly a count of the random irregularities within a system.
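Kolmogorov complexity cannot be computed exactly, but any general-purpose compressor gives a rough upper bound on it, which is enough to illustrate the point. A minimal sketch, using Python’s zlib and arbitrary stand-in data:

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib-compressed description of the data: a crude
    upper bound on its Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

regular = b"ab" * 5000         # 10,000 bytes of pure regularity
irregular = os.urandom(10000)  # 10,000 bytes of randomness

print(description_length(regular))    # tiny: the regularity compresses away
print(description_length(irregular))  # ~10,000: randomness resists compression
```

The regular string compresses to a few dozen bytes while the random string barely compresses at all, which is exactly the sense in which Kolmogorov complexity mostly counts irregularity.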

Kolmogorov’s and Gould’s ideas of complexity are similar in that they say nothing about how a system came to be structured as it is, but once it exists, they can measure its complexity. They both measure a static amount of structure. Shannon does not treat system structure as static, but as a dynamic relationship that unfolds over time. Unfortunately, Shannon’s solution just pushes the static assumption a little further down the road by assuming the ‘known random source’ from which the outcome can be selected. To define Shannon information one needs to have absolute statistical knowledge of the closed system within which the signal is selected: its ‘environmental,’ ‘noise,’ or ‘entropic’ potential. Kolmogorov and Gould count static amounts. Shannon counts statistical potential within a static limit.

Now consider how complex complexity can get. The universe contains no closed systems, and no system in the universe is static. So how does one use such metrics to count what does actually occur? It turns out that both Gould’s and Kolmogorov’s metrics are great for measuring systems that already exist and can be easily idealized to static structures. For these measurements, one just needs to stop the universe (that is, make it absolutely static), draw a box around what one thinks the system is, and then count. With Shannon metrics, one can measure dynamic transformations, but only in limited contexts (that is, by making the system’s potential to interact with its environment statistically static). In Shannon metrics the box is simply extended: it is arbitrarily drawn around the environment the system is contained within.

Now, before we consider the limits of static measurements in a dynamic universe, let us briefly consider how to decide what to count. A deep flaw in all of these measurements is that they require a human to decide what is countable and what is not. Do we need to count gluon states to determine the complexity of a one kilogram rabbit? If we did, then a one kilogram rock could be ‘more complex’ than our rabbit. It could have more and different quark and gluon relationships. And what about one kilogram of nebular vapor?

Presumably, when Gould speaks of organismic complexity as a count of more and different parts, he is only considering biological parts: organelles, cells, organs, etc. If Gould counts detail up to and including organelles, he chooses to ignore the molecular details within each organelle that make it act uniquely in some situations. Gould is choosing to ignore fine grained complexity. Kolmogorov complexity also depends on an arbitrary human decision about how to represent the minimum constitutive part of the system to be computed. What exactly does each bit include, or, more interestingly, not include, and how could that fine grained and chaotic complexity influence the future?
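The dependence on the chosen level of description is easy to demonstrate. In the sketch below (Python, with a made-up signal), one and the same ‘system’ is encoded at different precisions; the measured description length, our stand-in for complexity, changes simply because we decided how much fine grain each symbol is allowed to ignore.

```python
import math
import zlib

# One and the same 'system': a smooth signal sampled 4,096 times.
signal = [math.sin(x / 50.0) for x in range(4096)]

def description_length(values, decimals):
    """Compressed size of the signal encoded at a chosen precision.
    The precision is a human decision, and it changes the measured complexity."""
    text = ",".join(f"{v:.{decimals}f}" for v in values)
    return len(zlib.compress(text.encode(), 9))

for decimals in (1, 3, 6, 12):
    print(decimals, description_length(signal, decimals))
# Coarse descriptions compress to almost nothing; fine grained ones look
# ever more 'complex', though the underlying system never changed.
```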

This brings us back to the inability of static amounts of complexity or information to measure a dynamic universe. Both Gould’s and Kolmogorov’s complexity metrics are fundamentally limited to measuring structure that has already evolved, and both are arbitrarily ‘bounded.’ They contribute little to predicting future structure. Shannon information can be used to make limited predictions because it describes a process of how statistical relationships unfold over time. Numerous attempts have been made to use Shannon information to predict the kind of transformational structures that actually populate our universe, but little consensus has been reached (Grassberger, 1986, vol. 25) (Prokopenko, Boschetti, & Ryan, 2009).

Peter Grassberger does a wonderful job of illustrating the problem of defining information complexity with the set of three images at the top of this post. Here is what he says about them:

“Compare now the three patterns shown … [above]. Fig. 1c is made by using a random number generator. Kolmogorov complexity and Shannon entropy are biggest for it, and smallest for Fig. 1a. On the other hand, most people will intuitively call Fig. 1b the most complex, since it seems to have more “structure.” Thus, complexity in the intuitive sense is not monotonically increasing with entropy or “disorder.” Instead, it is small for completely ordered and for completely disordered patterns, and has a maximum in between (Hogg and Huberman 1985). This agrees with the notion that living matter should be more complex than both perfect crystals and random glasses, say.

“The solution of this puzzle is the well-known ability of humans to make abstractions, i.e., to distinguish intuitively between “important” and “unimportant” features. For instance, when one is shown pictures of animals, one immediately recognizes the concepts “dog,” “cat,” etc., although the individual pictures showing dogs might in other respects be very different. So one immediately classifies the pictures into sets, with pictures within one set considered as equivalent. Moreover, these sets carry probability measures (since one expects not all kinds of dogs to appear equally often, and to be seen equally likely from all angles). Thus, one actually has ensembles: when calling a random pattern complex or not, one actually means that the ensemble of all “similar” patterns (whatever that means in detail) is complex or not complex. After all, if the pattern in Fig. 1c were made with a good random number generator, the chance of producing precisely Fig. 1c would be exactly the same as that to produce Figs. 1a or 1b (namely 2^-N, where N is the total number of pixels). If we call the latter more “complex,” it really means that we consider it implicitly to belong to a different ensemble, and it is this ensemble that has different complexity” (Grassberger, 1986, vol. 25).
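Grassberger’s observation can be reproduced crudely without any images at all. Below is a small Python sketch (our own toy byte strings, standing in very loosely for his ordered, structured and random patterns): the compressed description length, an entropy-like measure, grows steadily from order to disorder, even though the intermediate pattern, a regular motif peppered with irregularities, is the one we would intuitively call the most structured.

```python
import os
import random
import zlib

size = 4096
rng = random.Random(0)

ordered = bytes(size)                        # stand-in for Fig. 1a: completely uniform
motif = bytes(rng.randrange(256) for _ in range(32))
structured = bytes(                          # stand-in for Fig. 1b: a repeated motif...
    rng.randrange(256) if rng.random() < 0.15 else b   # ...with occasional irregularities
    for b in motif * (size // 32)
)
disordered = os.urandom(size)                # stand-in for Fig. 1c: pure noise

for name, pattern in (("ordered", ordered), ("structured", structured), ("random", disordered)):
    print(name, len(zlib.compress(pattern, 9)))
# Compressed length rises from ordered to random; 'entropy' ranks the noise
# highest, while intuition ranks the middle pattern as the most complex.
```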

Clearly, measuring complexity and information is an unresolved problem. Has the universe gained complexity in the sense of Kolmogorov randomness or in the sense of Gouldian numbers of different parts? And if the universe provides so many natural joints at which to cut it into understandable pieces, how do we choose which joints to use in which situations? Who gets to choose? Perhaps Shannon information allows for a prediction of what might emerge, but it too depends on what we choose to look for, and, even more importantly, it still confines its predictive power to the known limits of the chosen system. Clearly, we do not yet know how to measure change in this universe.

The normal procedure in such a case is to choose a definition for complexity and then move on. This investigation instead proposes to redefine complexity as a relative, not absolute, measurement. Essentially, complexity is a measure of difference between systems, but systems are scale free, so what is environment to one system is component to another. What is a regularity (e.g. a signal) in one system can be noise at a smaller scale level. A cell is a regular component within your body, which is itself made of many different fluids, boundaries and cells doing many different ‘noisy’ activities. That same cell is a noisy environment for its own regular organelles, which are in turn noisy environments for regular proteins, and on down to molecules, atoms, protons and quarks. The only rule of relative complexity is that environments are always noisier than the systems that are maintained and reproduced within them. Outside is noisier than inside.

This definition is not, in fact, a definition. It is a proposal, a way to start exploring what a new definition could be. Much of this investigation explores a different way of understanding relative change: how to compare change at different scale levels. However, this exploration is descriptive, not deterministic. This investigation argues for a new way to understand universal evolution by showing how it evolved, how atoms, prokaryotes and minds were selected for persistence, not by offering a deterministic algorithm or formula that proves it.

As mentioned, this is just a beginning. It is important to impress upon the reader the faults of the current framework and the need for an improved one. To lay this foundation of doubt we need to take one more short peek at another aspect of the complexity of ‘complexity.’

Part 1: A Scale Free Universe (excerpt from “Origin of Mind: a history of systems”)

Coming to terms with scale limits is like coming to terms with prejudice. Both beliefs limit our understanding of the universe. Enlightened people no longer believe in prejudice, but the same cannot be said for scale limits. Judging from the literature, many scientists and science writers believe that there is nothing smaller than elementary particles and nothing bigger than the Big Bang.

In experiment after experiment, people of different ‘races’ have turned out to be equally capable and intelligent. Presumably the statistically equal distribution of intelligence applies not only to people across geographical space, but also across historical time. Presumably, people in seventh-century Persia were no ‘stupider’ than people in Philadelphia today. They may have ‘known’ less, but they were no less capable of knowing.

What does this tell us about scale in our universe? Let us take a look at the bottom end. Right now we believe that the bottom end of the scale is the Planck scale and the sixty or so elementary particles. Everything that we know of is made of sixty or so tiny energy structures (Gell-Mann, The Quark and the Jaguar: adventures in the simple and the complex, 1995). We can’t see any of these structures, but we can measure their effects with high tech tools. By categorizing these effects we have been able to give these structures distinct names like ‘electrons’ and ‘quarks.’ In 2010, when this investigation was written, the bottom end of the scale of causality was these sixty or so ‘quanta,’ but this was not always the case.

At the beginning of the nineteenth century John Dalton re-proposed a classic type of elementary particle: he proposed that each chemical element was composed of tiny, distinct ‘atoms.’ This was revised at the beginning of the twentieth century, when the hydrogen nucleus was thought to be an ‘elementary’ particle. In 1911 Ernest Rutherford discovered the atomic nucleus, and in the decades that followed the nucleus itself was found to contain ‘protons’ and ‘neutrons.’ In 1964 Murray Gell-Mann and George Zweig proposed that these protons and neutrons were composed of ‘quarks.’ In just over one hundred and fifty years the bottom end of our scale of causality went from elements, to atoms, to nuclei, to protons, to quarks. The logical question this raises is: where does this all end? What indications do we have that our current ‘quanta’ are truly at the bottom of causality?

Science is founded on replication. If an experiment can be replicated it can be verified. If it is true that scientists in the twentieth century consistently found smaller scale levels, what about the nineteenth century? What about the eighteenth, seventeenth and sixteenth centuries? Can the twentieth century experiment be replicated? Do the equally clever scientists of each century equally believe that they know what the materia prima of the universe is? Have they all equally been proven wrong by the next generations of scientists? What would we find upon careful inspection?

We would find that the scientists of each century were all more or less equally intelligent (we would not want our experiment to be inherently prejudiced, would we?). We would find that they were all equally convinced that their system explained causality, because they are famous for claiming so. We would find that they were all proven more right than their predecessors and more wrong than their successors. And finally we would find that they were all equally restricted by the technology of their age, because proof depends on exactness of measurement, and this has continually improved.

If this investigation were of a different character we could take the time to conduct this experiment. From classical elementalism to medieval alchemy to enlightenment chemistry to modern cosmology, each age has been populated by intelligent men who believed they knew the materia prima at the bottom of causality. All of these men built theories that were not only logically consistent but also constrained by their technology. All of these theories were eventually improved when technology improved, and improvement in each case has meant defining smaller, more precise parts that compose the physical world around us.

This historical fact implies that unless we believe ourselves smarter than others, or believe we have invented perfect measurement technology, we should be very careful in our belief that we know the bottom end of causality. It implies that future generations armed with better technology will continue to find smaller, more precise causal patterns. Quarks will be found to contain galaxies of smaller pattern, perhaps the minuscule ‘dimensions’ that string theories posit. Furthermore, there is no evidence that we know the top end of causality either. Every time we build a stronger telescope we see bigger, more distant patterns. Recently we discovered that the previously known universe is really just a tiny part of a vast universe of unknown dark matter and dark energy. Historically, there seems to be every reason to believe that the Big Bang universe we see now will soon be recognized as just another location within a far vaster ‘multiverse.’

The beliefs that everything is within the Big Bang and that nothing is smaller than quanta are probably wrong. We may or may not live in a scale free universe, but there is no reason to believe that the current measurable boundaries are the true boundaries. The history of science has consistently reminded us that there is no special place in space-time from which to measure scale. The discoverer of one of our current scale limits, Max Planck himself, “was warned by a professor of physics that his chosen subject (physics) was more or less finished and that nothing new could be expected to be discovered” (Kragh, 2002, p. 3). That was in the nineteenth century.

Humans have a natural inclination to limit their thought to the paradigms that map their own location, size and time scales. Revolutionaries like Galileo, Newton, Planck, Einstein and Schrödinger proved these scale limits wrong at every turn. Enlightened people should not believe in prejudice, whether it be against colour of skin, country of origin, or the fine graining of space-time.

For this reason, this investigation presumes a scale free universe. It takes Nobel Laureate Robert Laughlin’s position that the reason quanta are all ‘wave like’ is that they too are collective phenomena. They too are composed of many parts interacting in concert, like waves. Like everything else in this universe, quanta are communal. One of the longest-running experiments known to science could be framed as, ‘Do we know the materia prima of the universe?’ Every century that scientists repeat this experiment the result is the same: a resounding ‘No.’ If millennia of historical experiments are any indication, there is every expectation that quarks and electrons will soon be found to be the results of interactions at lower levels of causality. The quark itself will become understood as a “void for the greatest part, only interwoven by centers of energy,” just as its antecedents were (Bertalanffy, General System Theory, 1968).
As our telescopes push the limits of our universe ever outward, our microscopes and particle accelerators push the limits ever inward. At every scale level we find pattern. There is no indication that the current level is special or definitive.