Part 3: Entropy, Energy and Order (excerpt from “Origin of Mind: a history of systems”)

Another problem with measuring complexity can be approached through a related concept: entropy. Entropy was first defined by Rudolf Clausius. He was interested in generating work from steam engines and found that "Heat can never pass from a colder to a warmer body without some other change …" (Clausius, 1867). Over time this principle was found to be universal and it became known as the second law of thermodynamics. The second law states that in closed systems entropy, roughly speaking randomness, never decreases. As a result many theorists think of entropy as the inevitable loss of order (Carroll, 2010, p. 30; Kauffman, 2008, p. 13; McFadden, 2000, p. 132; Lederman & Hill, 2004, p. 183). Everything just naturally falls apart.
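
For readers who want the formal statement, the two standard textbook faces of entropy (Clausius's thermodynamic form and Boltzmann's statistical form) can be written as:

$$ dS \;\ge\; \frac{\delta Q}{T} \qquad\text{and}\qquad S = k_B \ln W $$

The first says that entropy never decreases in an isolated system (where $\delta Q = 0$); the second says that entropy counts the number of microscopic arrangements $W$ compatible with a macroscopic state, which is why higher entropy reads as 'more randomness.'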

If a fundamental law of this universe is that things fall apart, then how does this apply to the Big Bang? What fell apart, what was 'lost,' between then and now? The second law is often interpreted as saying that 'order' was lost. But how does the loss of order lead to humanity? To answer this we need to examine what 'order' means in a thermodynamic context. Thermodynamics is the study of heat exchange (thermo = heat, dynamic = change). Thermodynamic 'order' is bound up with heat differentials. This makes sense because the founders of thermodynamics were interested in producing work through heat exchange. The problem is that the heat differentials necessary to produce work can never be permanent. Entropy always reduces the heat differential over time, bringing the system to equilibrium; hence the impossibility of a perpetual motion machine. The second law states that order in the form of heat differentials is always lost.
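
A worked example of why the differential, and not the heat itself, is what matters: the textbook Carnot limit says that the maximum fraction of heat any engine can turn into work depends only on the temperature difference between its hot and cold reservoirs (temperatures in kelvin),

$$ \eta_{\max} = 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}} $$

When $T_{\text{cold}} = T_{\text{hot}}$ the efficiency is zero: at equilibrium there is no differential left to exploit, no matter how much heat remains.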

So what is heat? Heat is a macroscopic measurement of microscopic motion. Groups of atoms and molecules that vibrate more are hotter. When they vibrate they move, but relative to what? They move relative to each other. Since Einstein, there has been no fixed point from which to judge motion. Matter in motion is only 'in motion' relative to other matter. If motion is relative then heat is relative. Relative to what? Relative to everything else in the closed system, in the box. This, of course, raises the question of how finely you choose to measure differences inside your box, or between the box and its environment, its 'boundary.' Something can only be hot if something else is cold. There must be a relative differential to make both heat and motion meaningful. More relative motion in the box is 'hotter' only at the level of fine-graining the box defines, and only if there is a 'cooler' exterior. There is neither heat nor motion if there is no measurable difference created by delineating a boundary that defines this difference. Heat and motion are relative to how you fine-grain the measurements that define them. If everything is the same temperature then everything is moving at the same speed relative to everything else. This is called equilibrium. It is the state towards which entropy relentlessly moves. But where do we stop counting what counts as 'everything'? Where do we draw the fine-graining line?

In a nutshell, if both motion and heat are relative measures, then the only truly important measurement in entropy is difference. Entropy inexorably lowers heat/motion differentials. This puts us in a strange position for using entropy to describe the difference between the Big Bang and now. The initial state was a tiny 'hot' soup of energy. There were no heat differentials in the initial state. Everything was moving equally fast. Almost fourteen billion years later there are heat differentials everywhere. My coffee is moving faster than this sofa, the sun is moving faster than the Earth, and your brain is hopefully moving faster than your toes. Differences are everywhere now. The question is, if there were no differences at the beginning and differences in temperature are always lost as per the second law, why do we have more differences now? Why isn't the universe one big cooling gas cloud or crystal?

Scientists like Eric Chaisson and Fred Spier talk about the rise of complexity in the universe (Chaisson, 2001; Spier, 2010). Other scientists like Sean Carroll talk about the loss of order (Carroll, 2010). No one denies the second law, but how it applies to a universe that appears to be 'complexifying' remains mysterious. This problem is closely related to the 'arrow of time' and it remains one of the great intractable problems in science. Many theorists try to get around this problem by saying that complexity (gain of order) is built up in some places only by exporting masses of entropy (loss of order) elsewhere, so that the overall net loss of order still fits the second law. This solution has two flaws.

First, it presumes that the universe is moving into empty space and that empty space can be used as an entropy 'garbage can.' The problem is that the universe is not like a cupcake with a firecracker in it, where parts of the cupcake 'move into' the air around it. The universe isn't moving into an absolute Newtonian empty space, away from a fixed point source. The universe is expanding, it is growing, everything relative to everything else. Everything is receding from everything else at a rate proportional to its distance. The universe is like bread dough that is expanding to become a loaf. There is no empty space that can be filled with disordered matter-energy. Everything around us, including the 'space,' was all here at the beginning; it is simply expanding. Empty space isn't rushing into our universe to fill the ever widening gaps. Where would it come from?
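
The 'bread dough' picture is usually summarized by Hubble's law, which says that a distant galaxy recedes from us (and from everything else) at a speed proportional to its distance, with no privileged centre:

$$ v = H_0 \, d $$

where $H_0$ is the Hubble constant. Double the distance and the recession speed doubles, exactly as two raisins twice as far apart in rising dough separate twice as fast.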

The second problem with displacing entropy to explain the growth of complex order is even harder to explain away. No matter how you cut the cake, the result was that the universe moved from a thermodynamically undifferentiated state to a thermodynamically differentiated state. According to the second law of thermodynamics, 'entropy,' this should never happen. Correction: this should never happen in a 'closed' system.

But what if the universe were scale-free? What if the only box put around the universe was the illusory one we imagine by stopping our measurements at certain scale-levels? What if the only thing that ‘closes’ the universe system is the limits of our tests?

When Claude Shannon was developing information theory he was looking for a term for the measure of uncertainty in a message. He was going to call this measure the 'uncertainty function,' but John von Neumann recommended otherwise:

“You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage” (Tribus & McIrvine, 1971).

The problem has always been that concepts like entropy, complexity and order have been hard to define. The result has been unclear concepts used to describe theories that are unclear about the big picture. We have no theory of everything. To remedy this problem this investigation works towards redefining complexity. Like 'entropy,' we are looking for some way to measure the change from the Big Bang until now. This is extremely difficult and it is, to a deep extent, the impetus behind this investigation.

One thing that does seem likely is that information will play a key role. Shannon's information theory is the only known metric for dynamic change. Canonically, Shannon information is understood as the measure of the reduction of uncertainty by a receiver. Where entropy is the increase in uncertainty, randomness, or noise within a spectrum of interaction, information is the increase in certainty, regularity, or predictability within a spectrum of interaction. Information is the regularities within a less regular (i.e. noisy) environment (Deacon, 2008). This investigation tries to forge a bond between complexity and information so that we can understand the evolution of the universe not just as a loss of order, but also as a gain of 'order,' as defined by complexity and information. This investigation casts entropy and the loss of order as just one side of the coin the universe flips. The other side of entropy is positive symmetry and the gain of order. Both entropy and positive symmetry are universal and both are statistical steps into an unknown future.
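
To make 'reduction of uncertainty' concrete, here is a minimal sketch in Python (the four-symbol source and the message are hypothetical, chosen only for illustration): the receiver's uncertainty is Shannon's entropy over the possible outcomes, and the information carried by a message is how much of that uncertainty it removes.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2 p), in bits; zero-probability outcomes are skipped."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical source: four equally likely symbols, so the receiver starts at maximum uncertainty.
prior = [0.25, 0.25, 0.25, 0.25]

# A hypothetical message rules out two of the four symbols,
# collapsing the receiver's distribution to two equally likely outcomes.
posterior = [0.5, 0.5, 0.0, 0.0]

uncertainty_before = shannon_entropy(prior)      # 2.0 bits
uncertainty_after = shannon_entropy(posterior)   # 1.0 bit

# Information, in the canonical Shannon sense, is the uncertainty removed.
print(f"information received = {uncertainty_before - uncertainty_after:.1f} bit(s)")
```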

So now back to the question. How do we measure the change in the universe between the Big Bang and now? Is the universe more complex or less complex now? The answer depends on scale. If you presume that there are no scale levels smaller than quanta, then you can presume the initial state of the Big Bang was a uniform mix of energy, a point source. You just say there cannot be any differences between the different things in this point source because there are no different locations that can maintain different states within it. Everything is the same because it is all in the same place. The amount of information needed to describe that point source is minimal. That is what a point source is: simple, easy to describe; it contains just one bit of information. Unfortunately, information can only exist as a relative measure between existing states and possible states. It only exists as measured against a noisy environment. What is the environment for the universe? The universe was a 'point source' relative to what?

Supposedly, this simple Big Bang 'bit' contained the whole universe. The question is, can we really believe that the entire universe was simple, that it had no fine-grained structure? This investigation presumes that if we were the size of a quark component and looked into the initial state, we would find a whole universe of pattern. This is what a scale-free universe means: pattern at all levels. The result of this point of view is that there was an unknown level of differentiation packed into that tiny Big Bang. This unknown level of differentiation has changed into the current, known level of differentiation over the last fourteen billion years. Fortunately, this presumption ends entropy's exclusivity.

Physical entropy is a one-way process. It goes from heat difference to heat equality, from order to randomness, from information to noise. If the sender were the Big Bang and the current universe were the receiver, and entropy were the only rule, then the initial state should have been highly differentiated and rich in information. Over the last fourteen billion years we should have lost that information. We should be noise. We should be a gas cloud. Obviously, we are not.

The problem of defining complexity is the problem of 'compared to what?' At what scale level do we arbitrarily stop counting differences, and how do we choose to box, or limit, what we want to measure? Just because we cannot see fine-grained pattern yet does not mean it does not exist. To presume that it does not exist because our current technology has limits is to commit the prejudice of previous centuries.

Was the initial state more complex than the current state? We don't know. It didn't have more matter-energy. It had exactly the same amount of matter-energy. Did this matter-energy have more different relationships than current matter-energy? This depends on the scale level we measure those differences on. The bottom line is that a scale-free universe makes measures of complexity relative, not absolute. My ninety-kilogram body may be considered more complex than a ninety-kilogram rock because I contain more movement differentials. However, it may be considered less complex than ninety kilograms of vapourized rock because as this vapour disperses it will enter billions of increasingly randomly differentiated heat states. My body maintains a relatively limited, predictable and repetitive number of heat states. Does this mean that vapour is more complex than I am? When I imagine the pattern that constitutes me, it is not Grassberger's perfect checkerboard image to the left, nor is it the maximum randomness image to the right. It is somewhere in between.

This investigation rejects both the simplistic point of view of 'the growth of entropy' represented by cosmologists like Carroll and that of 'the growth of complexity' represented by Spier and Chaisson. In their place it takes a new look at how those relative differences that define our current universe came to exist. Entropy is only a part of this story. Complexity is only part of this story. This investigation introduces new ways to think about these concepts.

One of the keys to this new approach is reimagining ‘order.’ Order is not some absolute measure of heat difference or ‘complexity’ or randomness or information. Order is also relative. In this investigation ‘order’ is defined as the kind of persistent differences that have appeared since the Big Bang, absent any fatuous comparisons with an unknown ‘initial state.’ Order here is defined as those regularities in relative motion differentials that have accrued since shortly after the ‘initial state.’ At some point shortly after the ‘initial state’ there began to persist those certain regularities of relative motion that constitute matter as we know it. These regularities did not exist previous to this point and they continue to exist up until now. These regularities are what this investigation calls ‘order.’ The addition of billions of emergent types of regularities to this initial ‘order from the unknown’ is the story of humanity.

Part 2: Complexity and Information (excerpt from “Origin of Mind: a history of systems”)

So, how does transformation occur in a scale-free universe? How do we talk about 'what' caused 'what' and 'when' if everything we currently know of is made of some unknown level of fine-grained energy relationships? We cannot. Until we develop better measurement technology we will not know for sure what is down there. What we can say for sure is that the billions of different 'things' that we currently know constitute our universe are all composed of what looks like the same matter-energy. We all know that a teaspoon is different from a bluebird, but we also know that the two are made of exactly the same matter-energy. What makes the bluebird different from the teaspoon is not different elementary particles; it is how the very same elementary particles maintain different energy relationships. In fact, different energy relationships are what constitute the different forms of matter.

This investigation pursues how these different relationships were sampled, maintained and reproduced from a completely undifferentiated Big Bang to the vastly differentiated universe of today. However, presuming a scale-free universe has important ramifications for how we choose to measure this differentiation.

How can we measure the accrued differentiation of the universe? Can we call this measure complexity, and if so, is the universe getting more or less complex over time? It started with a tiny uniform ‘hot soup’ of matter-energy and, “from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved” (Darwin, n.d.). Obviously it is getting more complex – isn’t it? Well, this would depend on how we define ‘complexity.’

Common sense definitions of complexity usually rely on measures of 'more' and 'different.' The more, and the more different, elements a system has, the more complex it is. Biologist Stephen Jay Gould refers to this measure when he describes complexity as the "number and form of components" (Gould, 2002, p. 1264; Gould, 1996). By his definition, the more different components an organism has, the more complex it is.

On the other hand, physics and computer science definitions of complexity often refer either to information theory or algorithmic complexity. Peter Grünwald and Paul Vitanyi summarize the relationship between the two nicely:

“Both theories aim at providing a means for measuring ‘information.’ They use the same unit to do this: the bit. In both cases, the amount of information in an object may be interpreted as the length of a description of the object. In the Shannon approach, however, the method of encoding objects is based on the presupposition that the objects to be encoded are outcomes of a known random source—it is only the characteristics of that random source that determine the encoding, not the characteristics of the objects that are its outcomes. In the Kolmogorov complexity approach we consider the individual objects themselves, in isolation so-to-speak, and the encoding of an object is a short computer program (compressed version of the object) that generates it and then halts” (Grünwald & Vitanyi, 2004).

Note first that the terms ‘information’ and ‘complexity’ appear correlative here. They both measure information even though Kolmogorov calls his measure complexity. Shannon developed his definition first and his solution was to measure information as a relationship between the outcome that does occur and the probability distribution of what could have occurred. He called this statistical relationship ‘entropy’ because entropy, as previously defined by Boltzmann, was also a measure of a future outcome based on its probability in that environment. What Shannon was essentially able to do was to show how any universal system could be defined as a communication process. This process was the selection of a signal (what does occur) from a limited set of noise (what could occur).

Kolmogorov was unsatisfied with this solution. He was a mathematician and wanted an absolute measure of the individual object, not a probability. He got it by limiting the definition of the system to what was inside it. For him, any system, or entity, is only as complex as the minimum number of bits needed to describe its components. By eliminating the need to measure a system's relationship to its environment, Kolmogorov invented a way to effectively count what something was. The result is that Kolmogorov complexity is essentially a measure of how random something is. This is because Kolmogorov complexity does not count the regularities (they are compressed away); his measure is mostly a count of the random irregularities within a system.
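
Kolmogorov complexity itself cannot be computed exactly, but a common stand-in is the length of a compressed description of the object. The sketch below (using Python's zlib purely as an illustrative compressor, not as Kolmogorov's definition) shows the point made above: regularities compress away, so what is left to count is mostly the random, irregular part.

```python
import os
import zlib

def description_length(data: bytes) -> int:
    """Length of a zlib-compressed copy: a rough, practical proxy for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

n = 10_000
regular = b"AB" * (n // 2)   # highly regular: one short rule ("repeat 'AB'") describes it all
random_ = os.urandom(n)      # random bytes: no description much shorter than the data itself

print("regular string :", description_length(regular), "bytes")  # tiny
print("random string  :", description_length(random_), "bytes")  # close to n
```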

Kolmogorov's and Gould's ideas of complexity are similar in that they say nothing about how a system came to be structured as it is, but once it exists, they can measure its complexity. They both measure a static amount of structure. Shannon does not treat system structure as static, but as a dynamic relationship that unfolds over time. Unfortunately, Shannon's solution just pushes the static assumption a little further down the road by assuming the 'known random source' from which the outcome can be selected. To define Shannon information one needs absolute statistical knowledge of the closed system within which the signal is selected, its 'environmental' or 'noise' or 'entropic' potential. Kolmogorov and Gould count static amounts. Shannon counts statistical potential within a static limit.

Now consider how complex complexity can get. The universe contains no closed systems and no system in the universe is static. So how does one use such metrics to count what actually occurs? It turns out that both Gould's and Kolmogorov's metrics are great for measuring systems that already exist and can be easily idealized as static structures. For these measurements, one just needs to stop the universe (i.e. make it absolutely static), draw a box around what one thinks the system is, and then count. With Shannon metrics one can measure dynamic transformations, but only in limited contexts (i.e. by making the system's potential to interact with its environment statistically static). In Shannon metrics the box is simply extended to be arbitrarily drawn around the environment the system is contained within.

Now, before we consider the limits of static measurements in a dynamic universe, let us briefly consider how to decide what to count. A deep flaw in all of these measurements is that they require a human to decide what is countable and what is not. Do we need to count gluon states to determine the complexity of a one-kilogram rabbit? If we did, then a one-kilogram rock could be 'more complex' than our rabbit. It could have more and different quark and gluon relationships. And what about one kilogram of nebular vapour?

Presumably, when Gould speaks of organismic complexity as a count of more and different parts, he is only considering biological parts: organelles, cells, organs, etc. If Gould wants to count detail up to and including organelles, he chooses to ignore the molecular details within each organelle that make it act uniquely in some situations. Gould is choosing to ignore fine-grained complexity. Kolmogorov complexity also depends on an arbitrary human decision about how to represent the minimum constitutive part of the system to be computed. What exactly does each bit include, or more interestingly not include, and how could that fine-grained and chaotic complexity influence the future?

This brings us back to the inability of static amounts of complexity or information to measure a dynamic universe. Both Gould's and Kolmogorov's complexity metrics are fundamentally limited to measuring structure that has already evolved and is arbitrarily 'bounded.' They contribute little to predicting future structure. Shannon information can be used to make limited predictions because it describes a process of how statistical relationships unfold over time. Numerous attempts have been made to use Shannon information to predict the kind of transformational structures that actually populate our universe, but little consensus has been reached (Grassberger, 1986; Prokopenko, Boschetti, & Ryan, 2009).

Peter Grassberger does a wonderful job of illustrating the problem of defining information complexity with the set of three images at the top of this post. Here is what he says about them:

“Compare now the three patterns shown … [above]. Fig. 1c is made by using a random number generator. Kolmogorov complexity and Shannon entropy are biggest for it, and smallest for Fig. 1a. On the other hand, most people will intuitively call Fig. 1b the most complex, since it seems to have more “structure.” Thus, complexity in the intuitive sense is not monotonically increasing with entropy or “disorder.” Instead, it is small for completely ordered and for completely disordered patterns, and has a maximum in between (Hogg and Huberman 1985). This agrees with the notion that living matter should be more complex than both perfect crystals and random glasses, say.

The solution of this puzzle is the well-known ability of humans to make abstractions, i.e., to distinguish intuitively between “important” and “unimportant” features. For instance, when one is shown pictures of animals, one immediately recognizes the concepts “dog,” “cat,” etc., although the individual pictures showing dogs might in other respects be very different. So one immediately classifies the pictures into sets, with pictures within one set considered as equivalent. Moreover, these sets carry probability measures (since one expects not all kinds of dogs to appear equally often, and to be seen equally likely from all angles). Thus, one actually has ensembles: when calling a random pattern complex or not, one actually means that the ensemble of all “similar” patterns (whatever that means in detail) is complex or not complex. After all, if the pattern in Fig. 1c were made with a good random number generator, the chance of producing precisely Fig. 1c would be exactly the same as that to produce Figs. 1a or 1b (namely 2^-N, where N is the total number of pixels). If we call the latter more “complex,” it really means that we consider it implicitly to belong to a different ensemble, and it is this ensemble that has different complexity” (Grassberger, 1986).
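
Grassberger's point can be illustrated numerically, though only loosely (this sketch is an analogy, not his measure): a compressed description length, like Shannon entropy or Kolmogorov complexity, grows monotonically from a checkerboard-like pattern to pure noise, even though the intermediate, clumpy pattern is the one intuition calls most complex.

```python
import random
import zlib

random.seed(1)
n = 4096  # cells in each one-dimensional toy "pattern"

# (a) perfectly ordered: strictly alternating cells, a 1-D stand-in for the checkerboard
ordered = [i % 2 for i in range(n)]

# (b) "structured": long correlated runs, a loose stand-in for the intermediate figure
structured, state = [], 0
for _ in range(n):
    if random.random() < 0.05:  # occasional flips produce clumps rather than pure noise
        state = 1 - state
    structured.append(state)

# (c) completely disordered: independent coin flips
disordered = [random.randint(0, 1) for _ in range(n)]

for name, cells in (("ordered", ordered), ("structured", structured), ("disordered", disordered)):
    size = len(zlib.compress(bytes(cells), 9))
    print(f"{name:10s} -> {size:5d} compressed bytes")

# Compressed size increases monotonically (ordered < structured < disordered),
# while intuitive "complexity" peaks at the structured pattern in the middle.
```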

Clearly, measuring complexity and information is an unresolved problem. Has the universe gained complexity in the sense of Kolmogorov randomness or in the sense of Gouldian numbers of different parts? And if the universe provides so many natural joints at which to cut it into understandable pieces, how do we choose which joints to use in which situations? Who gets to choose? Perhaps Shannon information allows for a prediction of what might emerge, but it too depends on what we choose to look for, and even more importantly, it still limits its predictive power to the known limit of the chosen system. Clearly, we do not yet know how to measure change in this universe.

The normal procedure in such a case is to choose a definition for complexity and then move on. This investigation proposes to redefine complexity as a relative, not absolute, measurement. Essentially, complexity is a measure of difference between systems, but systems are scale-free, so what is environment to one system is component to another. What is a regularity (e.g. signal) in one system can be noise at a smaller scale level. A cell is a regular component within your body, which itself is made of many different fluids, boundaries and cells doing many different 'noisy' activities. That same cell is a noisy environment for its own regular organelles, which are in turn noisy environments for regular proteins, and on down to molecules, atoms, protons and quarks. The only rule of relative complexity is that environments are always noisier than the systems that are maintained and reproduced within them. Outside is noisier than inside.

This definition is not, in fact, a definition. This kind of proposal is a way to start exploring what a new definition could be. Much of this investigation explores a different way of understanding relative change. It explores how to compare change at different scale levels. However, this exploration is descriptive, not deterministic. This investigation argues for a new way to understand universal evolution by showing how it evolved, by showing how atoms, prokaryotes and minds were selected for persistence, not by a deterministic algorithm or formula that proves it.

As mentioned, this is just a beginning. It is important to impress upon the reader the faults of the current framework and the need for an improved one. To lay this foundation of doubt we need to take another short peek at a further aspect of the complexity of 'complexity.'

Part 1: A Scale Free Universe (excerpt from “Origin of Mind: a history of systems”)

Coming to terms with scale limits is like coming to terms with prejudice. Both beliefs limit our understanding of the universe. Enlightened people no longer believe in prejudice, but the same cannot be said for scale limits. Judging from the literature, many scientists and science writers believe that there is nothing smaller than elementary particles and nothing bigger than the Big Bang.

In experiment after experiment, people of different 'races' have turned out to be equally capable and intelligent. Presumably the statistically equal distribution of intelligence applies not only to people across geographical space, but also across historical time. Presumably, people in seventh-century Persia were no 'stupider' than people in Philadelphia today. They may have 'known' less, but they were no less capable of knowing.

What does this tell us about scale in our universe? Let us take a look at the bottom end. Right now we believe that the bottom end of the scale is the Planck scale and the sixty or so elementary particles. Everything that we know of is made of sixty or so tiny energy structures (Gell-Mann, 1995). We can't see any of these structures, but we can measure their effects with high-tech tools. By categorizing these effects we have been able to give these structures distinct names like 'electrons' and 'quarks.' In 2010, when this investigation was written, the bottom end of the scale of causality was these sixty or so 'quanta,' but this was not always the case.

At the beginning of the nineteenth century, John Dalton re-proposed a classic type of elementary particle. He proposed that each chemical element was composed of tiny distinct 'atoms.' This was revised at the beginning of the twentieth century, when the hydrogen nucleus was thought to be an 'elementary' particle. However, Ernest Rutherford's discovery of the nucleus in 1911 eventually led to the identification of the 'protons' and 'neutrons' inside it. In 1964 Murray Gell-Mann and George Zweig proposed that these protons and neutrons were composed of 'quarks.' In just over one hundred and fifty years the bottom end of our scale of causality went from elements, to atoms, to nuclei, to protons, to quarks. The logical question this raises is: where does this all end? What indications do we have that our current 'quanta' are truly at the bottom of causality?

Science is founded on replication. If an experiment can be replicated it can be verified. If it is true that scientists in the twentieth century consistently found smaller scale levels, what about the nineteenth century? What about the eighteenth, seventeenth and sixteenth centuries? Can the twentieth century experiment be replicated? Do the equally clever scientists of each century equally believe that they know what the materia prima of the universe is? Have they all equally been proven wrong by the next generations of scientists? What would we find upon careful inspection?

We would find that the scientists of each century were all more or less equally intelligent (we wouldn't want our experiment to be inherently prejudiced, would we?). We would find that they were all equally convinced that their system explained causality, because they are famous for claiming so. We would find that they were all proven more right than their predecessors and more wrong than their successors. And finally we would find that they were all equally restricted by the technology of their age, because proof depends on exactness of measurement and this has continually improved.

If this investigation were of a different character we could take the time to conduct this experiment. From classical elementalism to medieval alchemy to enlightenment chemistry to modern cosmology, each age has been populated by intelligent men who believed they knew the materia prima at the bottom of causality. All of these men built theories that were not only logically consistent but also constrained by their technology. All of these theories were eventually improved when technology improved, and improvement in each case has meant defining smaller, more precise parts that compose the physical world around us.

This historical fact implies that unless we believe ourselves smarter than others, or believe we have invented perfect measurement technology, we should be very careful in our belief that we know the bottom end of causality. It means that future generations armed with better technology will continue to find smaller, more precise causal patterns. Quarks will be found to contain galaxies of smaller pattern, perhaps the minuscule 'dimensions' that string theories posit. Furthermore, there is no evidence that we know the top end of causality either. Every time we build a stronger telescope we see bigger, more distant patterns. Recently we discovered that the previously known universe is really just a tiny part of a vast universe of unknown dark matter and dark energy. Historically, there seems to be every reason to believe that the Big Bang universe we see now will soon be recognized as just another location within a far vaster 'multiverse.'

The beliefs that everything is within the Big Bang and that nothing is smaller than quanta are probably wrong. We may or may not live in a scale free universe, but there is no reason to believe that the current measurable boundaries are the true boundaries. The history of science has consistently reminded us that there is no special place in space-time from which to measure scale. The discoverer of one of our current scale limits, Max Planck himself, “was warned by a professor of physics that his chosen subject (physics) was more or less finished and that nothing new could be expected to be discovered” (Kragh, 2002, p. 3). That was in the nineteenth century.

Humans have a natural inclination to limit their thought to within the paradigms that map their own location, size and time scales. Revolutionaries like Galileo, Newton, Planck, Einstein and Schrödinger proved these scale limits wrong at every turn. Enlightened people should not believe in prejudice, whether it be against colour of skin, country of origin, or the fine-graining of space-time.

For this reason, this investigation presumes a scale-free universe. It takes Nobel Laureate Robert Laughlin's position that the reason quanta are all 'wave-like' is that they too are collective phenomena. They too are composed of many parts interacting in concert, like waves. Like everything else in this universe, quanta are communal. One of the longest-running experiments known to science could be framed as, 'Do we know the materia prima of the universe?' Every century that scientists repeat this experiment the result is the same: a resounding 'No.' If millennia of historical experiments are any indication, there is every expectation that quarks and electrons will soon be found to be the results of interactions at lower levels of causality. The quark itself will become understood as a "void for the greatest part, only interwoven by centers of energy" just as its antecedents were (Bertalanffy, 1968).
As our telescopes push the limits of our universe ever outward, our microscopes and particle accelerators push the limits ever inward. At every scale level we find pattern. There is no indication that the current level is special or definitive.

Chapter 1

Where is My Mind?

“Where is my mind

Where is my mind

Wheeeee-ere is my mind …”

The Pixies

The riddle of the human mind has tantalized us for centuries. While the universe has steadily surrendered her secrets to bigger telescopes and better microscopes, the problem of the human mind has remained recalcitrant. The current approach to this problem is through the brain. Where else would the mind live? How else could it exist? Unfortunately, despite more than twenty years of intensive investment in the modern brain sciences, we still know precious little about how the brain creates subjective human experience. How and why neurons sustain the flux and reflux of hopes, fears and desires in the forms that imbue our daily experience with oh-so-human vim and vigor remains one of humanity's deepest mysteries. Perhaps we just need a better microscope, or perhaps we need another approach.

This investigation tracks down the ‘mind’ in an unexpected way. It starts with the Big Bang and, step by step, it shows how each major universal transformation shares common features with previous transformations. Taken together, these common features constitute a pattern. This investigation shows how the emergence of the human ‘mind’ fits into this pattern of transformation. What a mind could be, its features and capacities, becomes evident when its emergence makes sense in a universe that has been producing different systems the very same way for billions of years.

This book is the pursuit of human nature through deep time.  It is a historical explanation for how the human mind evolved. It proposes an adaptation of Darwin’s logic of natural selection and then applies this new version of an old logic to the natural history of the universe. Thus, the focus of this investigation will be much broader than the canonical human brain. Its larger focus will be on the evolution of order in general.[1] Currently, there is no theory for how the universe gained order. This investigation proposes that emergent order is always built the same way. It demonstrates how the structures at each level of historical emergence are unique, but the process by which they emerge has always been the same. Essentially, the proposal is that emergence is a process that is common to each of the major transitions the universe has experienced. The candidate definition of emergence is:

Selection for symmetric persistence from expanding spectrums of interaction.

The universe is an expanding spectrum of interaction, which means that it has always been interacting with itself, so that it continually samples new interactions. These new interactions are subject to a kind of broad-scale selection process, much like Darwinian natural selection, in which only those interactions that are symmetric[2] end up populating our universe. Thus, only symmetric relationships are selected to persist. One could reformulate this hypothesis in layman's terms as: 'anything that can happen does happen, but only those things that happen regularly, exist.'[3]

Unpacking this hypothesis will be the Leitdifferenz (guiding difference) of this journey from the primordial Big Bang to the origin of mind. At each step we will see how cutting-edge research from apparently unrelated fields is producing a body of evidence that supports the principle of selection for symmetric persistence from expanding spectrums of interaction.

This investigation itself is organized into chronological sections that examine how different types of systems have emerged over the entire history of the universe. One of the challenges of such an extensive project is establishing a balance between consistency, rigor, and readability. The first challenge is to establish a lexicon that is capable not only of unifying concepts from different fields, but of remaining readable and thus capable of navigating the vast and subtle conceptual landscapes that different academic domains have developed. Erring on the side of too much rigor leads to extensive philosophical tracts that can bog down the reader, while erring on the side of too much readability leads to dismissal on grounds of frivolity. The first section walks the tightrope between these two towers of error by introducing a toolbox of concepts that are rigorous enough for experts, yet useful and consistent enough to allow the reader to navigate foreign scientific landscapes. In this section ancient concepts like transformation, thought and time will be reexamined, newly adapted concepts like symmetry will be expanded, and completely new concepts like the reactive face and propagating symmetry will be introduced. Readers who are uninterested in philosophical foundations are invited to start with "SECTION 2: Universe", but they may find themselves backtracking to nail down terms.[4]

The second section reviews cosmic origins. It starts thirteen point seven billion years ago with the Big Bang and shows how quantum, chemical and astronomic structures are related through a common process of emergence that is reminiscent of Darwinian evolution. This section proposes a new working definition for emergence and it shows how the historical transitions from quantum to chemical to astronomical systems embody this emergence.

The third section reviews how Earth conditions gave rise to the emergence of life in the last four billion years. It shows how the major transitions in biology are related to the major transitions in cosmology. In this section group selection is revised, expanded and compared to John Maynard Smith and Eörs Szathmáry’s list of Major Transitions in Evolution. The evolutionary histories of autocatalysis, binary fission, mitosis and meiosis are compared and the result is the establishment of a series of biological group selection events that represent the same process of emergence established in the previous section. Basically, the major biological transitions revolutionized their environments through the same process that structured the major universal transitions.

The fourth section reviews the evolution of humans and symbolic language over the last two million years. It demonstrates how the transition from proto-humans to symbolic humans fits seamlessly into cosmological and biological history (i.e. sections 2 and 3). In this section we examine the evolution of vocal/aural senses in bats and whales (e.g. echo-location) and show how they share design features with the evolution of language. The result is that both the emergence of Homo sapiens and symbolic language can be interpreted as fitting into a long cosmological and biological history of emergent group selection events.

The fifth and last section focuses on the origin of the human mind in the last hundred thousand years, as anticipated by billions of years of similar universal and biological transformations. The universe has a long history of selectively maintaining novel communicative interaction that reproduces novel levels of cooperative systems. The human mind is part of this long history. Binary fission is the language of bacterial persistence. Mitosis is the language of protist persistence. Meiosis is the language of metazoan persistence. Social symbolism is the language of human persistence. In each case a new way to store and access what-has-worked-before has been added to the emergent organisms' network of maintenance and reproduction. These new reproductive languages are preserved (however and whenever they occur) because they extend the adaptive futures of their emergent organisms. Many of the current mysteries of the mind can be clarified by understanding how the human mind emerges, maintains and reproduces itself. This section demonstrates how, in many ways, the transition to mind is no different from the preceding transitions.

The journey from the Big Bang to mind will require tracking current thinking in a number of scientific disciplines. Fred Spier's Big History and the Future of Humanity and David Christian's Maps of Time have introduced us to the 'Big History' format, the fourteen-billion-year grand vista of what happened when. Unfortunately, 'why' is an altogether different question. Frustrated with the slow progress cosmology is making on the question of 'why this universe and no other,' physicists like Lee Smolin and Robert Laughlin are breaking new ground by expanding the 'reductionist' context to include evolutionary and communal emergence concepts. At another level, biologists like Stuart Kauffman are expanding the biological context by demonstrating how life could emerge from chemical diversity – in his words, 'order for free.' Physicist Per Bak even showed us exactly what the conditions for self-organizing 'order for free' look like. In different buildings, all the way across campus, conceptual tools from the cutting edge of all these sciences are being adopted by cognitive scientists. Both Terrence Deacon's seminal work on symbolic origins and Daniel Dennett's valiant attempts to naturalize the human mind synthesize concepts from chaos, complexity and evolutionary theories. One way or another, all of these researchers are addressing the fundamental question of why we are the way we are. This investigation is based on research from all these sources, but it takes a big history view. The answer to how the universe makes a mind is founded in the broader question of how the universe makes order.

This investigation proposes a hypothesis for how the universe makes order. This hypothesis is based on a somewhat well-known, but rarely applied, fundamental conceptual transition: 'things' are 'processes.' Looked at from the point of view of 'things' (like quarks, suns and cells), the universe looks unconnected. These 'things' are our objects of study. Their structural differences are the foundations of our separate academic disciplines. When one looks at the different categories of our objects of study, the universe looks like a grab bag of unrelated and mysteriously emergent structures. Examined from the point of view of processes, however, the universe presents a consistent pattern of emergence. This pattern is not evident in the systems themselves, but in the way these systems came to persist – in their process of emergence.

In principle, the hypothesis of emergence proposed here is applicable to all major universal transitions, past and future (e.g. quantum, chemical, astronomical, biological, etc.). If any major new type of universal system could be shown to emerge outside of this pattern, this hypothesis would have to be discarded. Much of this investigation will be dedicated to demonstrating how the main types of systems that constitute our universe emerged from this pattern. However, once the pattern is established it will give us an evolutionary platform from which we can understand the origin of the human mind.

One of the most curious results of this investigation, at least for the cognitive sciences, is that the brain loses its power to explain the mind. From an evolutionary point of view the human brain is but one relatively recent link in a long chain of emergence. More important than the wet wiring of the brain (e.g. the 'neural correlates of consciousness' or the 'connectome')[5] is the process that the mind is engaged in – its function within the unfolding of the biosphere and the unfolding of the universe. Ultimately, understanding the neural structure of the brain will give us as little information about how our minds fit into this unfolding process as the number '233' gives us about the Fibonacci sequence.[6] Alone, both the number and the brain tell us very little about the systems they help maintain and reproduce. In the case of the number '233,' the structure of the number is meaningless taken out of its developmental context. It is the pattern itself – the process of its transformation – that makes the number 233 meaningful within the Fibonacci sequence. Likewise, it is not the structure of the human brain that determines what it means to have a mind. While a brain is necessary, it is also far from sufficient. What is sufficient is the process that the human brain has learned to maintain and reproduce. The human mind makes sense only within the transformational logic of how it is emerging into this universe. Only within the long and illustrious history of this transformational logic can we find what it means to have a mind, and to be human.

This investigation is a scale-free, state-of-the-art enquiry into natural origins. It is a new point of view, a substantial expansion of the Darwinian paradigm. It places human evolution in perspective with the history of life on Earth, and within the history of the universe. The aim is to show that universal evolution has been a process of selection for symmetric persistence from expanding spectrums of interaction for almost fourteen billion years. This process has ordered the emergence of all matter/energy since the Big Bang and, at a higher resolution, human emergence and the human mind are parts of this very same universal process.

 

“… from so simple a beginning endless forms most beautiful and most wonderful have been, and are being evolved.”

Charles Darwin

 

[1] See Appendix 1 for a definition of 'order' in the context of energy, entropy, information and complexity.

[2] The term symmetry will be defined in the traditional mathematical and physical way as 'transformation with preservation of structure.' This definition is discussed in the chapter, "Symmetry."
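
In symbols, and keeping to that traditional usage, a transformation $T$ is a symmetry of a structure $S$ when $T(S) = S$: applying the transformation leaves the structure unchanged.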

[3] Hats off to Jeff Forshaw and Brian Cox for the phrase, "anything that can happen, does happen" (Cox & Forshaw, 2011).

[4] The project of defining necessary terms for a work such as this deserves a book in and of itself. The option of putting these fifty or so pages at the beginning was an act of triage. The risk of losing readers' interest in an early metaphysical discussion was deemed preferable to the possibility of losing their confidence in the later process of operation and recovery.

[5] The ‘neural correlates of consciousness’ is a reference to the neurobiological search for the neural underpinnings of conscious thoughts as suggested by Francis Crick and Christof Koch. The ‘connectome’ refers to the recent effort to map the brain’s neural architecture as proposed by Olaf Sporns.

[6] The Fibonacci sequence is a mathematical sequence first recorded by Leonardo Fibonacci in which the addition of the last two numbers gives the next number in the sequence (e.g. 0, 1, 1, 2, 3, 5, 8, 13, … 233, …). The Fibonacci sequence is closely related to Adolf Zeising's famous 'golden ratio' and is particularly evident in the ontogenetic development of many organisms (Goodwin, 1994).
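
In symbols, the sequence is generated by the recurrence $F_0 = 0$, $F_1 = 1$, $F_n = F_{n-1} + F_{n-2}$, so that, for example, $233 = F_{13} = 144 + 89$.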