An important way to characterize our models is in terms of their computational capabilities. We can always think of the evolution of one of our models as corresponding to a computation: the system starts from an initial state, then follows its rules, in effect carrying out a computation to generate a sequence of results.
The Principle of Computational Equivalence [1:12] suggests that when the behavior of our models is not obviously simple it will typically correspond to a computation of effectively maximal sophistication. And an important piece of evidence for this is that our models are capable of universal computation.
We saw above that our models can emulate a variety of other kinds of systems, among them Turing machines and cellular automata. And in fact we already saw how our models can emulate what is known to be the simplest universal Turing machine [1:p709][94][95][96]. We also showed how our models can emulate the rule 30 cellular automaton, and the same construction can be used to emulate the rule 110 cellular automaton, which is known to be computation universal [1:11.8].
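As a concrete reference point for what is being emulated, here is a minimal sketch of the rule 110 cellular automaton itself, written in Python rather than in the notation used for our models (the neighborhood-to-bit encoding is the standard one for elementary cellular automata):

```python
# Evolve an elementary cellular automaton (here rule 110) on a finite
# row of cells with fixed zero boundaries.

RULE = 110  # each bit of the rule number gives the new value of a cell
            # for one of the 8 possible (left, center, right) neighborhoods

def step(row, rule=RULE):
    """Apply one step of the elementary CA rule to a row of 0/1 cells."""
    padded = [0] + row + [0]
    return [(rule >> (padded[i-1]*4 + padded[i]*2 + padded[i+1])) & 1
            for i in range(1, len(row) + 1)]

row = [0] * 60 + [1]   # a single black cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Universality means that, with a suitable (and in general very elaborate) initial condition in place of this single black cell, the same rule can be made to perform any computation [1:11.8].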
So what this means is that we can set up one of our models and then “program” it, by giving appropriate initial conditions, to make it do any computation, or emulate any other computational system. We have seen that our models can produce all sorts of behavior; what this shows is that at least in principle our models can produce any behavior that any computational system can produce.
But showing that we can set up one of our models to emulate a universal Turing machine is one thing; asking what computations a randomly chosen one of our models typically performs is quite another. Establishing this with certainty is difficult, but experience with the Principle of Computational Equivalence [1:12] in a wide range of other kinds of systems with simple underlying rules strongly suggests that sophisticated computation is not only possible to achieve in our models, but ubiquitous: it will occur basically whenever the behavior we see is not obviously simple.
This notion has many consequences, but a particularly important one is computational irreducibility [1:12.6]. Given the simplicity of the underlying rules for our models, we might imagine that it would always be possible, by using some appropriately sophisticated mathematical or computational technique, to predict what a model will do after any number of steps. But in fact what the Principle of Computational Equivalence implies is that, more or less whenever the behavior is not obviously simple, making such a prediction will take an irreducible amount of computational work: in effect, we will not be able to compute what the system will do much more efficiently than by just following the steps of its actual evolution.
Much of what we have done in studying our models here has been based on just explicitly running the models and seeing what they do. Computational irreducibility implies that this is not just something that is convenient in practice; instead it is something that cannot theoretically be avoided, at least in general.
Having said this, however, it is an inevitable feature of computational irreducibility that there is always an endless sequence of “pockets” of computational reducibility: specific features or questions that are amenable to computation or prediction without irreducible amounts of computational work.
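A small sketch makes the contrast concrete (in Python; the framing is ours, and it relies on the standard observations that no shortcut is known for the center column of rule 30, while the additive rule 90 admits a closed form via binomial coefficients mod 2):

```python
# Rule 30 vs. rule 90, both started from a single black cell.
# For rule 30 no shortcut is known: to get the center cell at step t,
# simulate all t steps. For the additive rule 90, the same question has
# a closed form: cell (t, x), with x offset from the initial cell, is
# C(t, (t+x)/2) mod 2.

def center_column(rule, steps):
    """Center-column values for steps 0..steps-1, by explicit simulation."""
    row = [0] * steps + [1] + [0] * steps
    column = []
    for _ in range(steps):
        column.append(row[steps])
        padded = [0] + row + [0]
        row = [(rule >> (padded[i-1]*4 + padded[i]*2 + padded[i+1])) & 1
               for i in range(1, len(row) + 1)]
    return column

def rule90_cell(t, x):
    """Rule 90 cell value in closed form (Lucas: C(t,k) is odd iff k&(t-k)==0)."""
    if (t + x) % 2:
        return 0                      # wrong parity: always white
    k = (t + x) // 2
    return int((k & (t - k)) == 0)    # C(t, k) mod 2

# The reducible case: the closed form agrees with explicit simulation ...
assert center_column(90, 64) == [rule90_cell(t, 0) for t in range(64)]
# ... while for rule 30 explicit simulation is, as far as is known, the
# only general way to obtain the (apparently random) center column:
print("".join(map(str, center_column(30, 32))))
```

The closed form for rule 90 is exactly such a pocket: it answers a specific question in time essentially independent of t, something no known method achieves for rule 30.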
But another consequence of computational irreducibility is the appearance of undecidability [107][108]. If we want to know what will happen in one of our models after a certain number of steps, then in the worst case we can just run the model for that many steps and see what it does. But if we want to know whether the model will ever do some particular thing, even after an arbitrarily long time, then in general no procedure can be guaranteed to answer with a bounded amount of effort: the question is formally undecidable.
Will a particular rule ever terminate when running from a particular initial state? Will the hypergraphs it generates ever become disconnected? Will some branch pair generated in a multiway system ever resolve?
These are all questions that are in general undecidable in our models. And what the Principle of Computational Equivalence implies is that this is not just the case in principle: undecidability is ubiquitous, and can be expected to be encountered in studying any of our models that do not show obviously simple behavior.
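The asymmetry between bounded and unbounded questions can be seen in a small Python sketch (the rewrite rules here are a hypothetical stand-in, not one of the systems above): within any fixed budget of steps, the termination question is settled just by running the system, but once the budget is exhausted the honest answer is "unknown", and for the unbounded question no general procedure can do better:

```python
# Bounded questions are decidable by running; unbounded ones in general
# are not. A termination checker for a simple sequential string
# rewriting system:

def rewrite_once(state, rules):
    """Apply the first matching rule at the leftmost position; None if halted."""
    for i in range(len(state)):
        for lhs, rhs in rules:
            if state.startswith(lhs, i):
                return state[:i] + rhs + state[i + len(lhs):]
    return None

def terminates_within(state, rules, budget):
    """Decidable: does the evolution halt within `budget` steps?"""
    for _ in range(budget):
        state = rewrite_once(state, rules)
        if state is None:
            return True
    return "unknown"   # the unbounded question cannot in general be settled

rules = [("BA", "AAB"), ("A", "")]            # an illustrative rule set
print(terminates_within("BABA", rules, 1000)) # True, found just by running
```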
It is worth pointing out that undecidability and computational irreducibility apply both to specific paths of evolution in our models, and to multiway systems. Multiway systems correspond to what are traditionally called non-deterministic computations [109]. And just as a single path of evolution in one of our models can reproduce the behavior of any ordinary deterministic Turing machine, so also the multiway evolution of our models can reproduce any non-deterministic Turing machine.
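The correspondence can be made concrete with a sketch (in Python; the two rewrite rules are purely illustrative): multiway evolution keeps the set of all states reachable at each step, and choosing one successor at each step yields an ordinary deterministic history, while the full layers trace out every non-deterministic one:

```python
# Multiway (non-deterministic) evolution of a string rewriting system:
# at each step, every applicable rewrite of every current state is
# performed, producing a set of states rather than a single one.

def successors(state, rules):
    """All states reachable from `state` by a single rewrite event."""
    out = set()
    for i in range(len(state)):
        for lhs, rhs in rules:
            if state.startswith(lhs, i):
                out.add(state[:i] + rhs + state[i + len(lhs):])
    return out

def multiway(initial, rules, steps):
    """Successive layers of the multiway system, starting from `initial`."""
    layers = [{initial}]
    for _ in range(steps):
        layers.append({s for st in layers[-1] for s in successors(st, rules)})
    return layers

for layer in multiway("AB", [("A", "BBB"), ("BB", "A")], 4):
    print(sorted(layer))
```

Recording which state produced which would give the multiway graph itself, in which the branch pairs mentioned above appear whenever two distinct rewrites apply to the same state.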
The fact that our models show computation universality means that if some system—like our universe—can be represented using computation of the kind done, for example, by a Turing machine, then it is inevitable that in principle our models will be able to reproduce it. But the important issue is not whether some behavior can in principle be programmed, but whether we can find a model that faithfully and efficiently reflects what the system we are modeling does. Put another way: we do not want to have to set up some elaborate program in the initial conditions for the model we use; we want there to be a direct way to get the initial conditions for the model from the system we are modeling.
There is another important point, particularly relevant to the effort to use our models in a search for a fundamental theory of physics. The presence of computation universality implies that any given model can in principle encode any other. But in practice this encoding can be arbitrarily complicated, and if one is going to make an enumeration of possible models, different choices of encoding can in effect produce arbitrarily large changes in the enumeration.
One can think of different classes of models as corresponding to different languages for describing systems. It is always in principle possible to translate between them, but the translation may be arbitrarily difficult, and if one wants a description that is going to be useful in practice, one needs to have a suitable language for it.