By John Ringland (www.anandavala.info)
(#1663) System Theoretic Metaphysics
(#1430) Metaphysics of Virtual Reality
(#1406) Computational Metaphysics
(#1415) SMN, Free Will and Unification of Paradigms
(#1418) SMN, Computational Metaphysics, Free Will and Duality
(#1427) Labels, Essence, Awareness, Computation, SMN
(#1428) Free Will, Attitude, Awareness, Self Control, Causality, Karma, Cosmic Will, Computation and Consciousness
Mathematics of Intension
(#1437) The Chinese Room, Experience, Knowledge and Communication
(#1470) Religion/Spirituality, Energy/Information and the Unification of Material and Spiritual Science
Also see other excerpts from my discussions with the Society for Scientific Exploration.
Main issues addressed in this discussion:
Is a general ‘process’ or existential phenomenon conceptually equivalent to a computational process? Can this reality be thought of as a computational process?
(I propose Yes)
Does a computational metaphysics imply a deterministic (clockwork) universe? Is everything pre-determined?
(I propose No)
Regarding the first question, I argue that any system that can be characterised by a finite set of state variables can be represented as a state space of all possible configurations, over which a state transition mapping can be defined. This is a form of computational process, described here, that can represent general systems. I show that the space of algorithmic processes is a proper subset of the space of general computational processes, and that the space of general computational processes is equivalent to the space of general processes. Hence, although algorithmic processes cannot implement all general processes, general computation can; thus not every general process is equivalent to an algorithmic computational process, but every general process is equivalent to a general computational process.
SMN is a deterministic transcendent process that manifests an empirical existential space. This empirical space is a space of pure potential existence – without any pre-determined structure. It is a space of pure potentiality within which any kind of universe can be represented.
Regarding the second question, I argue that systems may exist within the space of potential existence. In the case of non-ergodic systems their state evolves according to what exists and what happens (existential and causal data), thus the state of what happens (the causal relations) also evolves – thus the empirical space is self-programming (this is subtle, see later for details). It relates to the fact that any activity has two aspects, the active ‘process’ (or existential phenomenon) and the program (the data that defines the process). A process is determined by its program but the program may also be determined by the process, hence the process can be self-determined.
The empirical universe is still determined moment by moment but it is also self-determined rather than pre-determined. The moment by moment determinism leads to causal coherence and the self-determinism leads to the ability to both exercise one’s will and to decide one’s will. Hence one can choose one’s state of being, hence one is able to determine one’s own conditions of experience; it is this that underlies the concept of free will. Thus a deterministic transcendent process creates a programmable existential space in which empirical systems may determine their own existential states. The entire context is itself deterministic but from the perspective of the empirical systems it is the case that they can exert free will – they can act upon their will, they can decide their will, they can decide on how to decide their will and so on. The existential context is completely self-programmable and there is no deterministic algorithm determining the nature of empirical existence.
Some of the details of the mathematics of non-ergodic systems are still in development so this second section is still a work in progress and is presented solely as an outline of the conceptual argument.
A re-expression of the first question:
Is ‘computation’ narrowly defined as deterministic, rule based, algorithmic processing of data? Or can it be more broadly defined as the transformation of information, and thereby related to any conceivable process?
To prove that processes ARE NOT computational it would be sufficient to identify a single system that could not be represented by any computational process. There are systems that are not representable by any algorithmic, rule based process (narrowly defined computation) but is there some other computational process that can represent them?
To prove that processes ARE computational it would be necessary to define a computational process that can represent any conceivable system; any conceivable system could then be equivalently thought of as a computational process.
In this section I provide the general outline of a proof that all processes can be thought of as computational processes. I define a computational process that can emulate any conceivable system. This is just an outline since only the full implementation or detailed characterisation of such a computational process would constitute a complete proof.
SMN is complex and subtle so I will slowly introduce it by degrees as required by this discussion. For a more complete introduction see the website (SMN Details). First I will discuss some of the general principles.
Before a system can be implemented or modelled it must be able to be represented in some form. There must be some finite set of state variables that describes the existential state and causal state of the system. If there exists a set of states that completely characterises the system then I call it ‘representable’ and I also say that there are ‘sufficient’ state variables.
Are all systems representable? Can all systems be characterised by some finite set of state variables? General relativity suggests that all dynamical quantities are finite; i.e. there is no infinite velocity or energy and so on. Quantum physics suggests that the values of all observable states are quantised; e.g. energy comes in discrete packets. System theory also suggests that all systems have a finite set of states that they may occupy. Finite state values imply a finite range of states, and discrete state values imply a finite resolution or density of state values within that range. Hence there are a finite number of distinct states that the system can occupy (finite variation), and the system can be modelled within a finite state space as a finite computational process. But the proof that all systems are finite, discrete and representable is a complex issue.
The complete representation of a system requires both existential and causal states, i.e. what exists and what happens. The full characterisation of a system is equivalent to virtual creation, hence sufficient emulation is equivalent to implementation. If a representable system can be completely characterised by a simulation, then because the equivalent simulation program is a computational process, that system is conceptually equivalent to a computational process. Consider: to what degree does a word processing program actually implement a word processor, and to what degree does it merely model or emulate one? If the emulation is sufficiently complete it can be considered to be an implementation.
Any representable system can be modelled by SMN. SMN utilises a state vector (SV) that describes the existential state and a system matrix (SM) that describes the causal state. The SV describes the existential state and the SM describes a network of interaction channels that connect these states. The current existential state is input into the causal network that then determines the next existential state. Thus the SV defines what exists and the SM defines what happens.
The SMN model can be thought of in terms of a network of inter-connected systems. An SV element represents the state of the system, a row is the input interface for the system and the intersecting column is the output interface. This is useful for engineering purposes but in this discussion we will look at it slightly differently.
The SV and SM can be thought of as a set of quantities and a set of information channels between quantities. E.g. consider a set of reservoirs with a certain quantity of fluid and a set of pipes that connect these reservoirs, and each pipe has a particular capacity.
The above model consists of a 4x4 SM and two SV’s. The left SV1 is the initial existential state and the right SV2 is the next existential state. Notice that the SM multiplied by SV1 results in SV2. Notice also that the columns of the SM all add to one, and both SV’s add to thirteen. The normalised SM columns mean that the quantity is definitely distributed somewhere but it is under determined (probabilistic) as to exactly where. The operation of the matrix on the state vector is to distribute the quantities through the pipes and thereby move the quantities around the network of reservoirs. The total quantity of substance is conserved and is just distributed by the network of channels. This example illustrates the general inter-connectivity and distribution functions of SMN.
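This distribution function can be sketched in code. The following Python fragment is my own illustration: the matrix and quantities are hypothetical, not the figures from the model above, though the column normalisation and the SV total of thirteen are kept.

```python
# A minimal sketch of the reservoir analogy: a column-normalised 4x4 SM
# redistributes the quantities in the SV while conserving their total.
# The specific numbers are illustrative, not the figures from the text.

def apply_sm(sm, sv):
    """Next SV: element j of the SV is distributed down column j of the SM."""
    n = len(sv)
    return [sum(sm[i][j] * sv[j] for j in range(n)) for i in range(n)]

# Each column sums to one: every unit of 'fluid' definitely goes somewhere.
SM = [
    [0.5, 0.0, 0.2, 0.0],
    [0.5, 1.0, 0.0, 0.0],
    [0.0, 0.0, 0.8, 0.3],
    [0.0, 0.0, 0.0, 0.7],
]
SV1 = [4, 3, 5, 1]           # total quantity = 13

SV2 = apply_sm(SM, SV1)
print(SV2, sum(SV2))         # the total is still 13
```

The conservation of the total quantity follows directly from the column normalisation, matching the claim that the substance is only ever redistributed by the network of channels.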
However SMN is a general class of algorithms, whose members depend on how the system is represented and how the model is structured. This leads to two main representational methods: algorithmic (classical) and permutation space (quantum). The algorithmic approach represents distinct empirical states and distinct algorithmic relations between these states, whereas the permutation space approach represents a probability distribution over the set of all possible states (permutations) and the transitions between them.
Aside from classical and quantum representation there are also two general types of modelling methods, ergodic and non-ergodic.
Ergodicity implies that the events that make up a process have a constant relation to each other. I.e. if in some state there is a 100% probability that this state is followed by another state then that relation will apply in all such instances. For an ergodic system the causal programming is static – i.e. the elements in the SM do not change. An example of an ergodic process is traditional software, where the program itself does not change over time.
In non-ergodic SMN the SM elements can also evolve, thus the causal programming can evolve. Thus both “that which exists” and “that which happens” can evolve over time. Non-ergodic SMN is discussed briefly on the website (Non-Ergodic SMN). It is also covered in more detail later on in relation to the discussion on determinism, although that is still in development.
CESMN (classical ergodic SMN) is the most computationally simple method, which has practical engineering applications. It has been implemented in software to illustrate its functionality as a general information processor.
QNSMN (quantum non-ergodic SMN) is the fully general and most powerful method, which is used in metaphysical analyses. It is the computational process that is proposed in this discussion to be a complete virtual-reality generative process that can simulate any system or process. QESMN (quantum ergodic SMN) is adequate for addressing the first issue of the computational nature of reality but the full QNSMN is required to address the issue of determinism.
These four variations will be discussed below. The relation between classical and quantum representation illustrates the limitations of the algorithmic approach and the power of the permutation space approach. The relation between ergodic and non-ergodic systems illustrates the nature of determinism and its complexities.
Algorithmic representation is possible when there is some set of equations or laws that define the relations between the classical existential states. This is the traditional method of mathematical science and in the context of SMN it is referred to as Classical SMN (C_SMN).
This type of SMN operates on distinct empirical values that are directly represented. These values are transformed according to algorithmic relations between the variables.
See here for an example of A Simple Classical Particle implemented using classical SMN methods.
Notice that in this type of modelling the inner product of each row with the SV corresponds to a process that determines the next value of a particular state variable.
E.g. for a system with two state variables x and y:
x’ = fx(x, y)
y’ = fy(x, y)
Where ‘f’ indicates any computational process that takes x and y as input and produces a new value for either x or y. This need not be a simple mathematical equation such as for the simple classical particle, it can be any arbitrary process that takes the SV as input and produces a single appropriate state value. The row can be generalised as a program that operates on the SV and produces a single state value. In this way SMN can accommodate any arbitrary functions of the SV. Furthermore, as well as arbitrary functions in the SM, the SV contains arbitrary empirical data that is directly represented. This empirical data can be any data whatsoever.
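As a minimal sketch of this row-as-program idea in Python (the particular update rules fx and fy below are hypothetical examples of my own, not drawn from the source):

```python
# Classical SMN sketch: each 'row' is an arbitrary program that reads the
# whole SV and produces the next value of one state variable.

def fx(x, y):
    return x + y        # hypothetical update rule for x

def fy(x, y):
    return x * y        # hypothetical update rule for y

def step(sv):
    """One time step: x' = fx(x, y), y' = fy(x, y)."""
    x, y = sv
    return [fx(x, y), fy(x, y)]

sv = [2, 3]
sv = step(sv)
print(sv)   # [5, 6]
```

Any computable function of the SV could be substituted for fx and fy, which is the sense in which the row generalises to an arbitrary program.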
This method can be used to implement models of classical algorithmic systems, which includes the class of all software programs, all traditional engineering applications and all system models of classical systems. Thus it has broad engineering applications.
The coherent unified mathematical foundation opens up the possibility of using mathematical methods to operate on the system models. This could lead to advanced analysis and simulation methods, some of which have already been identified.
The permutation space representation method is a more complete method, and correspondingly far more computationally demanding. Rather than represent particular empirical values it represents a probability distribution over all possible empirical values. Hence it is described as a quantum approach (Q_SMN). The permutation space is a state space in which each point represents a particular state of the entire system; it maps the entire range of conceivable configurations of the system. The dynamics of the system is represented as transitions between states, thus there is also a probability mapping over the space of all state transitions.
The SV is an actualisation probability distribution (a.p.d) over the field of all possible existential states, (What exists, existential state). This is also known as a ‘wavefunction’ in quantum physics. The a.p.d over the range of all possible states describes the potential for actualisation for each possible state.
The SM is a state transition probability distribution (s.t.p.d) over the field of all possible transitions between existential states, (What happens, causal state). Thus the s.t.p.d over the range of all possible state transitions describes the likelihood of any transition taking place.
This approach is similar to the earlier example of reservoirs containing some kind of fluid that are connected by pipes of different capacities. In the current case the reservoirs are quantum states (SV elements), the fluid is probability of actualisation (SV data), the pipes are state transitions (SM elements) and the pipe capacities are state transition probabilities (SM data).
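A sketch of this representation for a two-bit system, in Python, with illustrative (made-up) distributions; the a.p.d is the SV and the s.t.p.d is the SM:

```python
# Permutation-space (quantum) SMN sketch for a 2-bit system.
# The SV is an a.p.d over the four states; the SM is an s.t.p.d whose
# column j gives the probabilities of moving from state j to each state.

STATES = ["00", "01", "10", "11"]

apd = [0.25, 0.25, 0.25, 0.25]   # uniform: completely undefined existence

stpd = [  # columns are source states; each column sums to one
    [0.1, 0.0, 0.0, 1.0],
    [0.9, 0.2, 0.0, 0.0],
    [0.0, 0.8, 0.3, 0.0],
    [0.0, 0.0, 0.7, 0.0],
]

def evolve(stpd, apd):
    """One time step: the new a.p.d is the s.t.p.d applied to the old one."""
    n = len(apd)
    return [sum(stpd[i][j] * apd[j] for j in range(n)) for i in range(n)]

apd2 = evolve(stpd, apd)
print(dict(zip(STATES, apd2)))   # still a normalised distribution
```

Because each column of the s.t.p.d is normalised, the a.p.d remains normalised under evolution, mirroring the reservoir analogy: probability of actualisation is the conserved fluid.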
In the NAND and XOR example the system matrix has the form:
Where, for example, the top row of the SM indicates that if the system is in state (11) then the next state will definitely be (00). This is a very simple permutation space mapping, but the SM elements may contain any probability values so long as the columns sum to one. Thus another possible state transition mapping that defines a far more complex two-qubit system is:
This quantum example has similar structure to the previous classical example but instead of the transitions being definite they are just most likely. Thus this system would exhibit behaviour similar to the previous system but it would also have more complex behaviours.
In the classical case the variables are represented by distinct values. This is equivalent to a collapsed p.d. For example, if ‘b’ is a binary variable then b=0 is equivalent to [p(b=0), p(b=1)] = [1, 0]. Thus there is a probability of one that b=0 and a probability of zero that b=1.
Thus a classical system is just a special case of a quantum system and for each classical system there is a corresponding quantum model, but the space of quantum models is much greater than the space of classical models so not every quantum model corresponds to a classical model.
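This collapsed distribution is easy to express in code; the helper below is my own illustration of the classical special case:

```python
# A definite classical value is a collapsed (one-hot) probability
# distribution, e.g. b = 0 corresponds to [p(b=0), p(b=1)] = [1, 0].

def collapse_to(value, n_states):
    """Return the one-hot p.d. representing a definite classical state."""
    return [1.0 if i == value else 0.0 for i in range(n_states)]

pd = collapse_to(0, 2)
print(pd)   # [1.0, 0.0]
```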
In both the SV and SM:
Classical distribution = focused, localised distribution
Quantum distribution = blurred, non-localised distribution
The SM is an s.t.p.d over a permutation space, so the SM elements are all probabilities and each column is normalised to one (the cosmos definitely observes each state in some manner). The SV is also normalised to one (empirical states definitely exist in some form).
For any reasonably complex system there is a massive state space of possible permutations. The direct mapping of these permutations provides full generality and no algorithmic reliance. It is not a humanly practical method for simulating complex systems but is a definite finite (although vast) computational process that can simulate any complex system. The vastness of the process indicates the true complexity of reality where even seemingly simple processes or phenomena can have significantly complex underlying state spaces.
The classical and quantum SMN methods can be combined in a single SMN model. Thus there may be quantum systems computing the state space mapping of a system and producing an a.p.d over the range of all possible states. This can then be collapsed via a random process (wavefunction collapse) that results in only one classical state (collapsed a.p.d). This can then be represented as a distinct classical state variable and operated upon by classical systems that apply algorithmic transformations to the classical state.
A related example is A Two Tiered NAND Gate. It does not use classical systems but it does transform into and out of the permutation space. The variables are transported as individual a.p.d’s, which are then merged into a permutation space where the collective state evolves and is then separated back into individual a.p.d’s.
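The collapse step described above amounts to sampling a single state from the a.p.d. A possible sketch in Python (assuming a simple cumulative-probability draw; this is my own illustration, not the author's implementation):

```python
# 'Wavefunction collapse' sketch: a random draw from the a.p.d selects one
# classical state, which could then feed classical (algorithmic) systems.
import random

def collapse(apd):
    """Sample one state index from an actualisation probability distribution."""
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(apd):
        cumulative += p
        if r < cumulative:
            return i
    return len(apd) - 1   # guard against floating-point round-off

apd = [0.1, 0.6, 0.2, 0.1]
state = collapse(apd)
print(state)   # a single classical state index in the range 0-3
```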
For law-based systems there are definite algorithmic principles (laws, equations or algorithms) that determine the nature of the system hence classical representation (C_SMN) can be used.
It is generally accepted that law-based (algorithmic) systems are conceptually equivalent to computational processes so this is a good place to begin the analysis of general processes.
The SMN representation of law-based systems is described above in the section on Algorithmic Representation.
The example of the law-based system (NAND and XOR system) illustrates an SMN model of a law-based system. Recall the law-based SMN model from above:
Or the example of a 2-bit binary incrementer with overflow:
00 → 01 → 10 → 11 → 00
0 → 1 → 2 → 3 → 0
Notice that the SM’s are very simple. This is a general property of the SM’s of law-based systems. The SM is a probability distribution over the space of all possible state transitions. This can in general be very complex, but if there is some general law that describes the behaviour of the system in a compact form then the SM must contain some order or algorithmic symmetry.
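In code, the incrementer's SM is simply a 4x4 permutation matrix over the states {00, 01, 10, 11}; the sketch below is my own illustration of the kind of order such a law-based SM contains:

```python
# The 2-bit incrementer as an SM over the permutation space {00,01,10,11}:
# a pure cycle, so the SM is a permutation matrix (all entries 0 or 1).

SM = [  # column j says where state j goes: 00 -> 01 -> 10 -> 11 -> 00
    [0, 0, 0, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
]

def step(sm, state):
    """Advance a definite (collapsed) state by one tick."""
    return next(i for i in range(4) if sm[i][state] == 1)

s = 0
history = [s]
for _ in range(4):
    s = step(SM, s)
    history.append(s)
print(history)   # [0, 1, 2, 3, 0]
```

The regular off-diagonal band of ones is exactly the "order or algorithmic symmetry" that lets the law be stated compactly as "add one, with overflow".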
Thus an algorithm is equivalent to an ordered or symmetric mapping over the space of state transitions. It encodes certain general relations between states that produce an ordered SM.
But the space of all possible SM’s is far greater than the space of ordered SM’s. If one takes a law-based ordered SM and changes an element in an arbitrary way then that system can no longer be represented by the same algorithm and it is most likely that it cannot be represented in any compact algorithmic form.
Therefore algorithmic methods are adequate to represent law-based systems but permutation space methods provide full generality and are adequate to represent general systems. Thus the space of broad computational processes (general processes) is greater than the space of algorithmic processes.
For non-law based systems there is no algorithmic representation and one must use the more general method of permutation space representation. The system has no symmetry or order for an algorithm to capture hence the SM is described as non-symmetric or disordered.
Recall the SM from the earlier NAND and XOR system that is similar to the law-based SM but which also contains non-collapsed probability distributions. This is shown again below.
Through comparison with the simple symmetric law based SM above it should be clear that this SMN model represents an actual process but this process cannot be described by any simple algorithm.
General systems cannot be represented by algorithmic methods but they can be represented by permutation space methods. Hence algorithmic computational processes are inadequate to model reality but permutation space methods are adequate. The empirical system may not be able to be represented by a simple law based algorithm but it can still be represented by permutation space methods. Thus broad computation is more complex than a rule based process.
The transcendent process (SMN algorithm) is a narrow computational process (deterministic algorithm) that can perform permutation space modelling of systems. Hence SMN is a narrow computational process that can represent arbitrary processes (broad computation). In this way SMN can represent general processes even though it is itself an algorithmic process.
The SMN algorithm itself and the SM and SV are transcendent ‘machinery’ that creates the potential for empirical systems to exist and interact. They create an undefined empirical space in which any conceivable state can exist and any conceivable event can happen. There is no intrinsic algorithmic programming that SMN imposes on the empirical space. The space arises and functions based upon an underlying algorithm but the space itself can be totally devoid of any influence from this. The machinery just creates the potential for empirical existence but it does not influence in any way what exists or happens.
For example, if the SM and SV elements are all zeros, this is a model of a universe in which nothing exists and nothing happens – this is non-existence. If the SM and SV are filled with uniform probability distributions then this is a universe within which something exists and happens but that something could be anything – this is completely undefined existence. If the SM and SV are filled with non-uniform p.d’s then this is a universe in which some things are more likely than others but things are not completely defined – this is under-defined existence (quantum realm). If the SM and SV are collapsed into a classical configuration then this is a universe within which things are completely defined (classical realm).
The fact that SMN can create universes where nothing exists or happens through to universes where everything that exists and happens is completely defined indicates that SMN itself operates in the ‘background’ and provides the possibility for empirical existence without in any way conditioning empirical existence to take on a particular form or to behave in a particular way. Within the empirical space the simulated universe can conceivably take on any form.
Hence the transcendent process is a narrow computational process but it manifests a broad computational context in which any arbitrary empirical system can be represented. Hence broad computation has a narrow computational basis and any process is conceptually equivalent to the broad computational model plus the narrow computational transcendent process that implements it.
Does a computational metaphysics imply a deterministic (clockwork) universe? Is everything pre-determined?
(I propose No)
In this section I introduce the concept of networks of non-ergodic systems. This leads to the concept of deterministic self-determinism. I first discuss types of determinism and then discuss ergodic systems. This is then developed further in a discussion on non-ergodic systems and ways of implementing and representing them. These are conceptually formed into a self-reflexive network that is self-determining. Then I discuss some more details on the issue of what actually determines the evolution of the universe and in what ways this impacts on our experience of existence. This section is still a work in progress and is released as an outline of the general conceptual argument, although some of the mathematical details are still in development.
There are several aspects to the concept of determinism.
All the systems illustrated so far have been ergodic systems. Ergodic processes are processes with static probability distributions; hence the existential ‘program’ does not change. In SMN this corresponds to the SM elements having constant values. The state vector can change thus representing a changing existential state, but the SM remains constant hence the causal state remains constant.
Ergodic systems are fully pre-determined. The causal programming (that which happens) is pre-defined. The network of causal interaction channels between systems is static. Thus systems remain in the same functional relations to each other.
A non-ergodic system is one in which the causal process can also change. This changes the causal structure of the empirical space. Hence both what exists and what happens can evolve over time. There are many complexities in this situation, some of which are discussed below.
Consider a situation where there is a direct process that is the causal program but there is also a meta-process that controls the direct process. This is possible because the process is an active phenomenon in one respect but in another respect it is a ‘program’, i.e. data that determines the nature of the process. If this causal data changes then the causal process also changes.
This SMN model implements a non-ergodic system. The SV is the same as before. The SM is different in the sense that the values contained by the matrix elements are now contained in the SMV and the SM elements are now references to these. The SMV contains the actual states of the SM elements. The SMM defines a process that controls the SMV.
The SV is the existential data. The SM is the causal process. The SMV defines the structure of the causal process. The SMM observes the state of the causal process and the existential state and modifies the SMV and thereby the causal process.
See Non-Ergodic SMN for another description.
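As a minimal sketch of this arrangement (my own construction, not the author's SMV/SMM implementation): a two-element SV evolves under an SM whose entries are themselves rewritten between steps by a hypothetical meta-rule, so the causal programming evolves along with the existential state.

```python
# Non-ergodic sketch: the direct process applies the SM to the SV, while a
# meta-process (playing the SMM role) rewrites the SM entries (the SMV data)
# each step, so 'what happens' itself evolves. The meta-rule is hypothetical.

def step_sv(sm, sv):
    """Direct process: the SM determines the next SV."""
    n = len(sv)
    return [sum(sm[i][j] * sv[j] for j in range(n)) for i in range(n)]

def meta_process(sm, sv):
    """Hypothetical SMM rule: shift causal weight toward the larger state."""
    a, b = sv
    w = 0.9 if a >= b else 0.1
    return [[w, 1 - w],        # columns remain normalised to one
            [1 - w, w]]

sm = [[0.8, 0.3], [0.2, 0.7]]
sv = [1.0, 0.0]
for _ in range(3):
    sv = step_sv(sm, sv)       # existential state evolves under the SM
    sm = meta_process(sm, sv)  # causal state evolves under the meta-process
print(sv, sm)
```

Here the program that moves the SV is itself moved by another program, which is the two-level structure described above; chaining or networking such levels gives the stacks and self-reflexive networks discussed next.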
One could also implement SMN within SMN and just use a single ergodic matrix with the non-ergodic systems represented within the empirical SMN process.
This phenomenon of levels of control is related to the concept of direct and meta processes. The direct process is the primary process that implements the base level functionality of the system, and there may also be other processes that monitor this direct process. For example, consider a control system program. It is aware of and responsive to the environment that is available to it through its input and output interfaces, but it is not aware of anything else, such as its own operational state or the meaning of its actions. It has no feedback loops and is simply a direct pipeline for information transformation, hence it is described as the direct process.
If there were another meta-process operating that was a control system that monitored the direct control system, then whilst the direct process is aware of the environment via the direct interfaces, the meta-system is aware of the direct process via its own interfaces. A system can only be 'aware' via its interfaces. If there is no interface then no information can flow and no computation or ‘awareness’ is possible.
At first these meta-systems can be thought of as forming a stack of successively higher-level feedback loops.
These diagrams are a shorthand for SMN processes. The external squares represent SV data, which represent either the state of an empirical system (existential data) or the state of a process (causal data). An internal square represents the static SM elements that implement an ergodic process. An ellipse represents a process or a region of the SM. A square joined to an ellipse represents the causal data (program) that implements the process. An arrow indicates the flow of information.
For example, in the ergodic case, there is existential data and an ergodic process that implements the causal process. In the non-ergodic case there is existential data, a programmable causal process and an ergodic process that programs the causal process.
In the non-ergodic case there is non-ergodicity of the causal process but there is still an ergodic foundation. This is the case for all such ‘stacks’ of processes; there is a static program that controls the program that controls the program that … etc.
It is more realistic to think of a complex network rather than a stack. This gives a system that is similar to a neural network. So consider the situation of a complex network of meta-systems, each of which monitors numerous other systems. The total system would not only implement causal dynamics in a direct manner, it would also be able to program its own state of functioning.
If there is no “stack end” then there is no ergodic process. Every system programs other systems and every system is programmed by other systems. Thus there is no static program underlying the context and it is entirely self-programmable.
In this sense systems are able to determine their existential and causal states according to their will. Their will is also a deterministic process but the process may be self-determined. Hence the universe of systems is completely flexible and there are no truly static deterministic influences.
Consider the scenario of a nation and its governance. All of its actions as a whole are determined by the nature of its context, the nature of matter, human nature, the nature of ecological systems and technological systems and bureaucratic systems and so on. But if it is a ‘free’ nation it can exercise self-determinism. For example, laws determine many of the societal processes but these laws are able to be changed and the process by which laws are changed can itself be changed and so on. Although the societal processes determine the nature of the society these processes are able to be determined by the society, hence it experiences freedom.
Consider the scenario of an individual addict. They may quit whenever they wish but the drug controls their will to some degree. Their internal decisions are driven by the addiction and in order to exert free will they need to re-program their will. They need to exercise self-determinism to overcome the determinism imposed by the drug. It is not a matter of a disembodied “free will” trapped within a deterministic body; the entire situation can be re-programmed.
Other factors also influence the experience of determinism or free will. One is the chaotic nature of complex non-linear systems: they are effectively indeterminate and exhibit extreme sensitivity to initial conditions, thus a small perturbation can have a large influence. Furthermore, at the quantum level the state of the empirical universe is under-determined (probabilistic) and the process of determining the classical state is purely random.
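This sensitivity can be illustrated with a standard example not taken from the source, the logistic map at r = 4: two trajectories starting one part in a billion apart diverge until their separation is of order one.

```python
# Sensitivity to initial conditions: two logistic-map trajectories that
# begin a tiny perturbation apart end up in completely different states.

def logistic(x, r=4.0):
    return r * x * (1 - x)

a, b = 0.3, 0.3 + 1e-9   # perturbation of one part in a billion
max_gap = 0.0
for _ in range(60):
    a, b = logistic(a), logistic(b)
    max_gap = max(max_gap, abs(a - b))
print(max_gap)            # grows to order 1 despite the 1e-9 initial gap
```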
If one visualises a process as a trajectory through the permutation space, then self-determinism means that the trajectories are always changing and they may attain any conceivable configuration over time. The chaos means that the trajectories are very complex and tangled, a small deviation in state at one point may send the system onto a trajectory that leads it far away from where it would have gone. The probabilistic under-determinism means that there are no distinct trajectories but only blurred regions of interconnectedness. All of these lead to a very dynamic, complex and flexible state space.
It could be that the existential context described above is a deterministic process that can be driven by free will, much as a car is a deterministic process that can be driven according to human will. Or perhaps there is no true “free will” as it is traditionally conceived – perhaps there is only deterministic self-determinism, and this leads us to experience free will.
The common concept of “free will” rests upon the supposition of a non-determined process, i.e. a process that has no program structuring it but is instead determined by the momentary whim of a ‘free’ agent. This is represented as an empty ellipse in the diagrammatic language described above, since there is no program data at all. Such a process can impose decisions that rely upon no defined decision-making process. To my mind such a system is a myth that has no correspondence with real systems; I cannot conceive of how such a process could operate without any operational guidelines. It seems far more likely to me that all processes are determined but are also self-determined, and it is from this self-determinism that the experience of ‘freedom’ arises and thus leads us to conceive of “free will”.
What determines the evolution of the whole system?
If the universe consists of an ergodic stack then there is an end to the stack and therefore a deterministic foundation, thus the universe follows determined trajectories through state space. If there is a non-ergodic network then there is self-reflexivity and the universe follows a self-determined trajectory.
To what extent can the universe be said to follow trajectories and to what degree do these trajectories determine the nature of the system?
Simple classical systems with collapsed p.d.s are represented as a distinct point on a distinct trajectory. This leads to a landscape of classical trajectories in the permutation space, with no intersection of paths. Thus the progression of any system depends only upon its initial configuration, and the system cannot deviate from its pre-determined trajectory.
If the system exhibits chaotic behaviour there is a complex “strange attractor” in the permutation space. Hence the system exhibits extreme sensitivity to conditions and even a slight change in the initial conditions can lead to vast differences in later conditions. Thus the trajectories are closely packed and entangled in some sense.
If there is a stochastic or random stage in the process such as wavefunction collapse, this allows for path wandering. The system is no longer represented as a distinct point but a distribution over a range of points and this means it partially traverses several paths and can collapse into any one of these. This leads to a permutation space with a blurred distribution rather than distinct trajectories.
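This blurring of a point into a distribution can be illustrated with a minimal sketch of my own (assuming, purely for illustration, a hypothetical 2-state system with an arbitrary transition matrix): the system's representation evolves as a probability distribution over states, and a “collapse” samples one definite classical state from it.

```python
# Minimal sketch: a stochastic process is represented not as a single
# point in state space but as a probability distribution over states.
# The transition matrix below is an arbitrary illustrative choice.
import random

# Row i gives P(next state | current state i); rows sum to 1.
T = [[0.9, 0.1],
     [0.5, 0.5]]

def step_distribution(p):
    """Evolve the distribution one step: p'_j = sum_i p_i * T[i][j]."""
    return [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]

def collapse(p):
    """Sample one definite classical state from the distribution."""
    return 0 if random.random() < p[0] else 1

p = [1.0, 0.0]          # start fully collapsed in state 0
for _ in range(20):
    p = step_distribution(p)

# The point has blurred into a distribution over both states rather than
# tracing one distinct trajectory; for this T it approaches [5/6, 1/6].
print(p)
print(collapse(p))      # one possible classical outcome of a collapse
```

The distribution partially "traverses" both paths at once, and each collapse realises only one of them, which is the path wandering described above.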
The permutation space of a network of self-reflexive systems not only describes all existential permutations but also all causal permutations; hence it describes the complete space of cosmic variability. It encompasses the space of all states of being, all states of doing and all variations of being and doing – hence it can represent any conceivable state of the universe.
From what initial conditions could the simulation have started?
If it started from a particular collapsed (classical) SV that describes a particular cosmic configuration, then the universe will follow a particular path, may evolve from that path and can eventually attain any conceivable state. If it starts from a particular quantum SV that describes a particular distribution of cosmic configurations, then the universe will follow a particular range of paths and can evolve from there. If it starts from a uniform probability distribution then all possibilities are explored. For example, consider the NAND and XOR example above. If the initial SV is a uniform p.d. then:
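Since the NAND/XOR example itself is not reproduced here, the following is a hedged sketch of my own of one plausible version of it: a 2-bit system where, each step, the first bit becomes the NAND of the pair and the second becomes the XOR of the pair, and the SV is a probability distribution over the four joint states.

```python
# Hypothetical reconstruction of the NAND/XOR example: the causal mapping
# and the 2-bit structure are my own assumptions, chosen for illustration.

def transition(state):
    a, b = state
    return (1 - (a & b), a ^ b)   # (NAND(a, b), XOR(a, b))

states = [(0, 0), (0, 1), (1, 0), (1, 1)]

def step_sv(sv):
    """Push a probability distribution over states through the mapping."""
    out = {s: 0.0 for s in states}
    for s, p in sv.items():
        out[transition(s)] += p
    return out

sv = {s: 0.25 for s in states}    # uniform p.d.: state completely undefined
for _ in range(3):
    sv = step_sv(sv)
    print(sv)
```

Under these assumptions the state (0,1) has no pre-image, so its probability drops to zero after one step: the initially undefined state evolves into a distribution shaped by the causal structure of the space, as the next paragraph describes.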
Thus in the first moment the state is completely undefined but it evolves from there into states that are characteristic of the causal structure of the space.
These probabilistic approaches are related to the idea of a quantum multiverse, where each possibility is an actuality in some ‘parallel’ universe. Hence all conceivable states of existence are explored and experienced. There is no actual collapse of the wavefunction into a single actuality; instead of just one possibility being explored, all possibilities are explored.
If every possibility is explored then there is no need for free will to decide what is experienced because everything is experienced. The question “Why do we experience this particular universe?” could then only be answered by the anthropic principle – because this just happens to be the one in which these particular experiences are manifesting. If we occupied another universe in the multiverse then we would experience that instead of this one.
If not every possibility is an actuality then free will or self-determinism can exert influence over which universes become actual and are thereby experienced. By influencing the probabilities in the permutation space self-reflexive systems can influence the state of what exists and what happens.
All systems are representable and any representable system can be represented by permutation space approaches such as SMN.
SMN is a computational process; since any representable system can be represented within SMN, any process is equivalent to a computational process.
SMN is a transcendent process that creates an empirical space of pure potential existence.
SMN can model self-determined systems without static programming.
The systems can be chaotic, stochastic, self-configuring, coherent systems with very complex state spaces.
The empirical space is completely self-programmable and can represent any conceivable state of what ‘exists’ and ‘happens’ in an empirical universe.
This scenario may constitute a deterministic context that can be ‘driven’ by a non-determined “free will”, or it may be that systems are self-determining, or both of these may be anthropic perspectives on a multiverse.