Finite Discrete Closed Information Systems

See also:
The Mathematical Analysis.


Elsewhere I have developed the concept of an FDIS, which is a particular type of Finite State Automaton (FSA) interpreted as a system model. Here we perform a group theoretic analysis of an FDIS as a model of a dynamical system, or as a dynamical group. In order to form a group it must meet certain conditions that impose further constraints on the nature of the empirical dynamics that can arise within the simulation space. These constraints result in a dynamical regime that is in many ways remarkably like our own physical universe.

Group Theoretic Analysis of FDCIS's

There are intricate and subtle connections between the concepts of groups, algebra, information spaces, system state spaces, and so on. Here we map out the connection between system models and groups, and thereby between causal programming and algebra.

Here we consider only finite groups; for example, the set of integers forms an infinite group, but the set of all possible states within a finite discrete state space forms a finite group. Thus the maximum size of the state space depends upon the underlying primitive state variables and their combined requisite variety. This gives a finite bound on the number of possible states, which also constrains the largest possible state transition in a single iteration and hence the largest exponent of the system matrix required to implement that transition. These constrain the duration of the longest period cyclic process that can be simulated and also the temporal resolution of the simulation. In general the longest cycle in a particular system is less than the absolute limit, and this defines a tighter upper limit on the total number of states N and the largest matrix exponent, and thus on the size of the group required to represent that system.

Consider the set {M^0, M^1, M^2, … , M^(N−1)} where M^0 is the identity matrix and M^1 is an ergodic system matrix that corresponds to a particular type of simulation system or empirical cosmos, defining the types of behaviours that are possible therein. The higher powers are simply the SM raised to higher powers, thus representing all possible state transitions; this represents the transcendent context of the computational process.

Also consider the sequence {v_0, v_1, v_2, … , v_(N−1)}, where each v_t = M^t v_0 is a state vector that represents the entire state of a simulation universe frozen in a single moment of time. The vector v_0 defines a particular starting condition which then determines all later states. These form a time sequence that represents the empirical context of the computational process, or the space within which empirical states transform over time, thus producing the empirical world. It is a sequence of column vectors, thus forming a two dimensional matrix that is indexed by state and by time. A column is a state vector, a cosmos frozen in a single moment in time; it is like a frame in a movie. A row is a time series for a particular state, thus a signal with amplitude as a function of time; it is like a beam of light from the movie projector.
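To make this structure concrete, here is a minimal sketch in Python (not part of the original SMN software; the state space size, the causal structure and all names are purely illustrative) of a toy system matrix M acting on state vectors, generating the sequence of moments and assembling the two dimensional state-by-time matrix whose columns are frames and whose rows are per-state signals.

```python
# A minimal sketch (not the author's SMN code) of the structures described above:
# a toy system matrix M acting on state vectors v_t, and the state-by-time matrix
# whose columns are "frames" and whose rows are per-state time series.
import numpy as np

n_states = 5                       # size of the finite state space (illustrative)
perm = [1, 2, 3, 4, 0]             # a simple cyclic causal structure
M = np.zeros((n_states, n_states), dtype=int)
for src, dst in enumerate(perm):
    M[dst, src] = 1                # M maps state src -> state dst

v0 = np.zeros(n_states, dtype=int)
v0[0] = 1                          # starting condition: the system begins in state 0

T = 8                              # number of iterations to record
frames = [v0]
for _ in range(T - 1):
    frames.append(M @ frames[-1])  # v_{t+1} = M v_t

record = np.column_stack(frames)   # columns = moments ("frames"), rows = state time series
print(record)
```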

This set is conceptually equivalent to the “Akashic Record” of esoteric metaphysics: it is a representation of an entire universe from beginning to end. Whilst the FDIS or finite state automaton operates only within a single transcendent computational moment with no past or future, the virtual world that it 'simulates' must be a whole and complete process that is 'closed' in a group theoretic sense from beginning to end. Thus all past, present and future states are represented within this set, even though in the absolute reality in which the information is actually being computed there is only the ever changing NOW. This explains how it could be that in some cases phenomena seem to be “spread out” in time, with past and future interacting, whilst in other ways only the present moment exists.

The group formed by these matrices and state vectors contains an FDIS (the system matrix), its higher powers and the set of all associated state vectors. These are considered as a dynamical group; i.e. as a class of mathematical entities and relations that produces an algebra implementing the causal structure of a dynamical system, and also an information space that is the whole computational space embodying both the transcendent and empirical contexts (or SM's and SV's).

The empirical context is otherwise known as the physical universe; it is the domain of traditional physics, the domain of relations between empirical states as functions of time, thus there are patterns within the matrix of the empirical space. However, this empirical space is only where the phenomena manifest; it is not the cause of the phenomena. SMN takes the causal structure deeper, where the empirical space is a projection and the causal structure is explicitly represented within an information space manifested by a system matrix. Thus, from an empirical perspective, the causal structure of the physical universe is represented transcendently, as underlying causal programming, and the empirical and transcendent contexts are defined in terms of the SV and SM spaces respectively. Our senses respond only to the empirical states, but sayings such as “look within”, “look behind the veil”, “penetrate the outer form and see into the essence” or “see God / Brahman within all things” all mean to use the intellect and the intuition to see into the SM side of the group and comprehend the causal structures behind and within all empirical phenomena or all patterns within the empirical space.

Consider now the set of all possible state vectors, i.e. all conceivable permutations of the underlying state variables. This is the set S, the complete state space. The set of operators {M^0, M^1, … , M^(N−1)} transforms any system state to any system state within S by traversing the causal pathways of the system as defined by M; thus the system may move about in the state space. However, by using a different matrix M a different set of causal pathways forms within the state space, hence there is a different phase portrait, describing a different behavioural topology and thereby implementing a different system.
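The dependence of the phase portrait on the choice of M can be illustrated with a small hedged sketch. For brevity each system is represented by a successor function on state indices rather than an explicit matrix; the two transition maps are arbitrary examples, and the state space is deliberately tiny, but they show how different causal structures decompose the same complete state space S into different sets of cycles.

```python
# A hedged sketch: two different transition maps over the same complete state space S
# produce different phase portraits (different decompositions of S into cycles).
def cycles(successor):
    """For a bijective (closed) map the state space splits into disjoint cycles;
    this walks each unvisited state around its cycle."""
    seen, portrait = set(), []
    for s in range(len(successor)):
        if s in seen:
            continue
        path = []
        while s not in seen:
            seen.add(s)
            path.append(s)
            s = successor[s]
        portrait.append(path)
    return portrait

map_a = [1, 2, 0, 4, 3, 5]   # one causal structure: a 3-cycle, a 2-cycle, a fixed point
map_b = [1, 2, 3, 4, 5, 0]   # a different causal structure: one 6-cycle
print(cycles(map_a))
print(cycles(map_b))
```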

Groups Must Be Closed

The group must be closed in every sense, mathematically as well as in the context of the system's causal structure and behaviour, i.e. not just in the flow of symbols but also in the logical meaning of their flow. Any deviation from closure and the system can crash, i.e. it can enter a non-computable state in which the next state of the simulation is undefined; this would bring the empirical universe to an unexpected end. If the group is closed in the group theoretic sense then every state transition results in a valid state, so one moment can continue to follow another indefinitely. The causal algebra must be invested with symbolic meaning and woven into a simulator that is behaviourally closed, in that it does not simulate paradoxical behaviour but instead “the simulation cosmos rests comfortably within the fundamental constraints without touching them”. For example, rather than looping a spatial coordinate so that one jumps from one side to the other as one crosses the limit, as in many computer games, one can use relativistic constraints programmed into the universe at a low level, thus producing a finite spatial universe within which one can never directly experience the finite limit. Thus it is finite but unlimited.

Whilst the cosmic FDIS programming must be closed in order to function as a group or an information space, the whole cosmic process may or may not be closed; there may be a higher world that we can interact with, but nothing can be said about that from the perspective of this theory, other than that there is no way of knowing unless we are informed about it by that higher world.

The way in which dynamical groups remain closed with a finite number of empirical states is that the dynamics must all be cyclic, thus any SM can be applied to any SV thereby producing a valid SV. Thus M^j v_t ∈ S for all j and t, and M^N = M^0 = I, where N is the period of the cycle. Even a chaotic system orbiting a strange attractor will be cyclic within a finite discrete regime, because at a given resolution there will at some stage be two points that become indistinguishable due to quantisation entropy. Since the future state of an FSA depends solely on the present state, once a state recurs the trajectory loops back, thus forming a cycle within state space.
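The period N of such a cyclic process can be found mechanically. The following sketch assumes the toy permutation-style system matrix used earlier, not any particular physical system, and simply raises M to successive powers until the identity reappears.

```python
# Sketch: find the period N of the cyclic dynamics, i.e. the smallest N with M^N = I.
# Assumes a permutation-style system matrix M (illustrative only).
import numpy as np

def period(M, max_iter=10000):
    I = np.eye(M.shape[0], dtype=int)
    P = M.copy()
    for n in range(1, max_iter + 1):
        if np.array_equal(P, I):
            return n
        P = P @ M
    raise ValueError("no cycle found within max_iter (matrix may not be closed/cyclic)")

perm = [1, 2, 0, 4, 3]                     # a 3-cycle and a 2-cycle -> period lcm(3, 2) = 6
M = np.zeros((5, 5), dtype=int)
for src, dst in enumerate(perm):
    M[dst, src] = 1
print(period(M))                           # 6: after 6 steps every state returns
```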

Group Requirements

Group Closure:

So long as M^N = M^0 = I and M^j v_t ∈ S for all j and t, the process is cyclic and only valid states arise, thus the group is closed.

Associative:

(M^i M^j) M^k = M^i (M^j M^k) and (M^i M^j) v_t = M^i (M^j v_t) and M^i M^j M^k = M^(i+j+k) regardless of grouping.

Unit Element:

M^0 = I, the unit matrix, hence M^0 M^j = M^j M^0 = M^j and M^0 v_t = v_t.

Inverse Element:

Due to the cyclic nature of the process M^N = M^0 = I, so (M^j)^(−1) = M^(N−j), which moves the simulation j steps backwards. Furthermore, these matrices are generally unitary, so their conjugate transpose is also their inverse; the transpose makes every input an output and every output an input, thus reversing the flow of information, and the conjugation reverses the sign of the phase component, thus reversing the direction of phase evolution or dynamical simulation. Thus the signals flow along reversed channels and the signals themselves are reversed in time, so the entire scenario is reversed.

Abelian (commutation):

In general matrix products do not commute, M A ≠ A M, so (M, v, N) as a whole is a non-abelian structure,

but M^i M^j = M^j M^i = M^(i+j), so the operators {M^0, M^1, … , M^(N−1)} form an abelian group under operator composition. Thus operators can be combined in any order, thereby producing another operator within that group.
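These group requirements can be checked numerically for a toy system, as in the hedged sketch below; it uses an illustrative permutation matrix rather than a real SMN system matrix, generates the set {M^0, … , M^(N−1)} and verifies closure, commutativity of operator composition, and the inverse element M^(N−j).

```python
# Sketch: numerically check the group requirements listed above for the set
# {M^0, ..., M^(N-1)} generated by a toy permutation system matrix M.
import numpy as np
from itertools import product

perm = [1, 2, 0, 4, 3]
n_states = len(perm)
M = np.zeros((n_states, n_states), dtype=int)
for src, dst in enumerate(perm):
    M[dst, src] = 1

# generate all distinct powers of M (the closed set of operators)
powers = [np.eye(n_states, dtype=int)]
P = M.copy()
while not np.array_equal(P, powers[0]):
    powers.append(P)
    P = P @ M
N = len(powers)                                      # group order (period of the dynamics)

def index_of(A):
    return next(i for i, B in enumerate(powers) if np.array_equal(A, B))

for i, j in product(range(N), repeat=2):
    C = powers[i] @ powers[j]
    assert index_of(C) == (i + j) % N                # closure: M^i M^j = M^(i+j mod N)
    assert np.array_equal(C, powers[j] @ powers[i])  # abelian under composition

for j in range(N):
    inv = powers[(N - j) % N]                        # inverse element: M^(N-j)
    assert np.array_equal(powers[j] @ inv, powers[0])
print("closure, commutativity and inverses verified for a group of order", N)
```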

Determinism and Noise

A finite discrete FSA in a purely deterministic regime would remain trapped in particular cycles and would not explore or utilise most of its state space or its behavioural potential. For engineered systems this is preferred, so that neatly defined cycles can be formed into networks of integrated cycles, and noise is minimised to keep the systems bound within their deterministic cycles. For general systems a degree of noise is optimal, allowing the system to skip between nearby state space trajectories and thereby fully occupy its whole state space and explore its full range of potential behaviour. A zero noise condition is a sub-optimal operating condition for any complex dynamical system. For example, the link between modern sanitised lifestyles and immune system dysfunction suggests that a little exposure to bacteria acts as beneficial noise. Furthermore, a certain degree of noise is required at the low level to allow for the propagation of subtle signals via the phenomenon of stochastic resonance.

The effects of noise on a system are analogous to states of mind: the broad, open, brainstorming and diverging state of mind is a noisy, fuzzy FSA that rambles through a broad state space, while the narrow, focused, converging state of mind is a precise and finely tuned FSA that follows well defined tracks. Indeed, consciousness can be defined as the ability to recognise and break out of loops; thus a purely noiseless deterministic system would remain trapped in loops and could not manifest the potential for consciousness, whereas a system with optimal noise would be able to traverse any loops and retain its freedom of movement within state space, i.e. its consciousness.
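A minimal sketch of this contrast, using an arbitrary toy successor map and a simple uniform-jump model of noise (both purely illustrative), shows how a deterministic FSA stays locked in one cycle while a slightly noisy one comes to occupy its whole state space.

```python
# Sketch contrasting a purely deterministic FSA (trapped in one cycle) with a slightly
# noisy one (able to wander across nearby trajectories and occupy more of its state space).
import random

successor = [1, 2, 0, 4, 3, 6, 5, 7]   # toy map: several small cycles and a fixed point

def explore(noise, steps=2000, seed=1):
    random.seed(seed)
    s, visited = 0, set()
    for _ in range(steps):
        visited.add(s)
        if random.random() < noise:
            s = random.randrange(len(successor))   # noise: jump to another state
        else:
            s = successor[s]                       # deterministic causal step
    return len(visited)

print("deterministic:", explore(noise=0.0), "states visited")   # stays in one 3-cycle
print("noisy:        ", explore(noise=0.05), "states visited")  # explores the whole space
```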

Finite Discrete Closed Dynamics

Here we are discussing the details of the simulation programming that underlies an empirical context or phenomenal world. We must ensure numerical closure of the data types within the programming; this is different from the group closure discussed earlier. This produces a dynamical simulation program that is completely fault tolerant and cannot crash for any internal reason.

Numerical Closure

In a variable dynamical regime closure is ensured by the absence of underflow or overflow in any of the dynamical equations. Thus for each equation and each variable one must check all extremum cases to ensure that no data loss or corruption occurs.

Below I use finite discrete notation. If you haven't already familiarised yourself with this then refer to here.

Thus if, for example, x = v.t then we must ensure that dx ≤ dv.dt. Alternatively, one can ensure both dx ≤ dv.Dt and dx ≤ Dv.dt, where the variables v and t are interrelated via the programming so that products smaller than dx do not arise, thus making values such as dv.dt impossible.
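As a hedged illustration of these extremum checks, the sketch below assumes a finite discrete variable can be summarised by its resolution d (smallest non-zero increment) and its range D (largest magnitude), and tests the product x = v.t for underflow and overflow at the extremes; the FDVar name and the particular numbers are illustrative only.

```python
# A minimal sketch of the extremum checks described above, assuming a finite discrete
# variable is characterised by its resolution (d, smallest non-zero increment) and its
# range (D, largest representable magnitude). The names FDVar, d and D are illustrative.
from dataclasses import dataclass

@dataclass
class FDVar:
    d: float   # smallest non-zero increment (resolution)
    D: float   # largest representable magnitude (range)

def closed_product(x: FDVar, v: FDVar, t: FDVar) -> bool:
    """Check x = v*t for numerical closure at the extrema:
    no underflow (smallest product still representable in x) and
    no overflow (largest product still fits within x's range)."""
    no_underflow = x.d <= v.d * t.d
    no_overflow = v.D * t.D <= x.D
    return no_underflow and no_overflow

x = FDVar(d=1e-6, D=1e6)
v = FDVar(d=1e-3, D=1e3)
t = FDVar(d=1e-3, D=1e3)
print(closed_product(x, v, t))   # True: dv*dt >= dx and Dv*Dt <= Dx
```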

See here for a discussion of numerical closure and non-entropic computation that loses no information and that cannot crash. The basic principles are equal requisite variety, no underflow and no overflow.

Dynamical Constraints

These last two requirements place dynamical constraints on the range and resolution of empirical quantities and thereby on the nature of the empirical universe that is being computed. A broken constraint is equivalent to a computer program incurring a “divide by zero” error and crashing, or to the simulation data becoming corrupted over time so that nonsensical phenomena occur. But this universe cannot crash and it remains coherent throughout time.

These considerations apply to the efficient and harmonious long term functioning of any complex dynamical system and thus to the issue of systemic health. If all information channels are coherent and no information is lost then the processes can persist indefinitely.

A simple way for the physical space to be closed is for it to loop around on itself, but this is not the case, as has recently been determined by scientists (this universe is spatially flat). Besides, one cannot loop energy or velocity values; phase, however, is looped into a circle of 2π radians.

Instead, the constraints are built into the programming of the Simulator; just as in a computer it is up to the program not to request an illegal memory address, so too it is up to the Simulator not to generate empirical values that cannot be represented by the transcendent computational infrastructure (the TC).

Constraints Due to Gravitational Potential Energy

For instance, gravitational potential energy and underflow place constraints on the largest physical distance between any two objects, whereas overflow places constraints on their smallest separation.

E = G.m1.m2 / r, so r = G.m1.m2 / E, thus r_max = G.dm² / dE defines a maximum separation for the lightest particles, but in general the maximum separation depends on the two masses multiplied by G / dE. These constraints imply that the empirical universe is topologically flat and finite in extent, and even has a center or origin defined as the center of mass of the physical universe, thus there is a fundamental inertial reference frame. Furthermore, the distance that one can travel from the center is proportional to one's mass, with only the lightest particles out near the edge of the universe.

Also, a similar chain of substitutions leads to a mass scale that is related to the Planck mass, m_P = √(ħc/G), but differs from it by a numerical factor. I'm not yet sure what the factor means; it is likely to be related to the correlation between the Planck context and the (0)context of the cyclic model. Note that in the cyclic model Lφ = 2π.

All of the Planck level quantities can now be derived; for example, by substituting one relation into another we obtain a quantity related to the Planck energy, E_P = √(ħc⁵/G), and so on for the other Planck constants such as the Planck length l_P = √(ħG/c³), the Planck time t_P = √(ħG/c⁵) and the Planck momentum p_P = √(ħc³/G).
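For reference, the standard Planck scale quantities mentioned above can be computed directly from G, ħ and c; this side calculation is ordinary textbook physics rather than the derivation sketched in this section, and the numerical values are approximate.

```python
# A side calculation (standard physics, not the author's derivation): the Planck
# scale quantities referred to above, computed directly from G, hbar and c.
from math import sqrt

G    = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.0546e-34     # reduced Planck constant, J s
c    = 2.998e8        # speed of light, m s^-1

l_p = sqrt(hbar * G / c**3)    # Planck length  ~ 1.6e-35 m
t_p = sqrt(hbar * G / c**5)    # Planck time    ~ 5.4e-44 s
m_p = sqrt(hbar * c / G)       # Planck mass    ~ 2.2e-8 kg
E_p = sqrt(hbar * c**5 / G)    # Planck energy  ~ 2.0e9 J
print(l_p, t_p, m_p, E_p)
```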

Iterative Constraints

Each iteration is a whole computation: it takes in the state of the entire cosmos and from that computes the next state. Therefore quantities that only apply over multiple steps need not be represented, or even representable. For example, consider a particular velocity v = dx / dt; there may be another, much slower velocity v' = dx / Lt, where Lt ≫ dt. However, if Δt is the simulation time step for (0)cycle computations, then Lt cannot occur within one single computation, so an FD variable capable of storing v' need not be used. Hence the FD constraints are modified by these iterative constraints.

A further iterative constraint is that if x' = x.y is iterated and x is to remain bounded, then x.x* ≤ 1 and y.y* ≤ 1. So x and y are complex probabilities.
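A quick numerical check of this boundedness condition, with arbitrary illustrative values for x and y:

```python
# Sketch of the boundedness condition above: iterating x' = x*y keeps |x| <= 1
# provided both factors satisfy x*conj(x) <= 1 (i.e. have modulus at most 1).
import cmath

x = cmath.exp(1j * 0.3) * 0.9    # |x| = 0.9 <= 1
y = cmath.exp(1j * 0.7)          # |y| = 1, a pure phase like the (0)context values
for _ in range(1000):
    x = x * y                    # evolving phase, non-increasing modulus
print(abs(x))                    # still <= 1 after many iterations
```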

Simulation Programming

Below is a calculation for quanta or photons that maps out all of the limits imposed by numerical closure. It is easiest to represent this as a conceptual network, for clarity. The calculation begins at the top, maps out the structure of the dynamical algebra, and at the bottom we see that there is only one velocity, i.e. the speed of light.

[Conceptual network diagram: the closure calculation for quanta, beginning from the fundamental constraints at the top and ending at the bottom with the single allowed velocity, the speed of light.]


Furthermore, if  and  then  (equal requisite variety, thus equivalent information spaces).

If only one energy and frequency are allowed then we get a set of dynamical constants that correspond with the Planck scale of our physical universe; this is elaborated on shortly. If more energies or frequencies are allowed then the quanta can have varying energies, all less than Ep, and all the dynamics occur safely within the underlying constraints.

Dynamical Ceiling

The Planck level of our physical universe provides a full set of dynamical quantities with just the right relations to produce a simulator that meets the requirements of a finite discrete closed information system.

The actual state data that is being computed by SMN is of the form r.e^(iφ) in the simplest case, with a modulus and 1, 3 or 7 phase components (complex, quaternion or octonion); see here for more detail. These are the lowest level components of the simulation programming, and this low level of implementation is extremely fine tuned and cut down to the simplest possible structure. The values Δφ, Δω and Δt are all constants, so nφ = nω = nt = 1 and log2(n) = 0, i.e. zero bits are required to represent the different values because there is only one value; therefore the corresponding FD data structure is simply {tp} instead of {nt, dt} or {tmin, nt, dt}, and so on for the other variables; this is far more compact and simple. In this context we will dispense with the dt and Lt notation and simply use tp or xp and so on, where the p subscript indicates that these are in fact Planck values.

In the discussion on “cyclic computational processes” it turned out that the primitive period T0 = tp so the Planck context is equivalent to the (0)context of the cyclic computational model. So one should refer to this for further details regarding many aspects of the Simulator's programming.

Within this context all dynamical quantities have constant values. This includes the constant speed of light c = xp / tp, which in this context is the only allowable speed. Furthermore, there is the usual range of dynamical variables and equations, such as E = ħ.ω and p = E / c, all of which involve the Planck constants woven together into a dynamical algebra or program.

These various dynamical quantities collectively define the dynamical properties of the (0)context and since they are all constants they are trivially finite and discrete so there can be no entropy in this context.

tp : Fundamental iteration time step.

ωp = 2π / tp : Fundamental cycle frequency.

xp = c.tp : Fundamental spatial displacement.

pp = Ep / c : Momentum of fundamental quanta.

Ep = ħ.ωp : Energy of fundamental quanta.

mp = Ep / c² : Effective mass of fundamental quanta.

So the fundamental iterative cycle has a period tp, which represents a quantum with maximum energy and all the above dynamical properties. On the level of the simulation program there is just an iterative complex equation with evolving phase, but from within the simulation there is a quantum with a range of dynamical properties.

In this sense the (0)context is the inner face of the simulation program and is the furthest or deepest that we may perceive in any empirical sense; it is the quantum vacuum. It is the lowest level of the transcendent computational machinery, which is related to the Akashic Field, and it forms a high energy ceiling to our empirical universe. In this context there are no variations of any of these values; this forms a static framework upon which the causal structure of the empirical universe depends.

Variable Dynamical Regimes

Next we explore the domain of multiple iterations and of variable valued dynamical quantities, where velocity can be less than c and the energy less than Ep and displacements less than xp and so on. This is the context where quantities take more than one possible value; for details see the cyclic computational model. Within that context we descend into the series of cycles from the (dn)cycle to the (Ln)cycle and explore the dynamics therein. But more generally we consider an arbitrary dynamical context with variable valued dynamical quantities, where all equations are bound by fundamental constraints as illustrated in the conceptual network calculation above and there are many distinct energy states.

Representation of Values

There is much more variety here, which must be represented in some way. It can be represented explicitly using FD variables with more requisite variety, since within the dynamical group there are SM's that correspond to multiple iteration processes, and these can be used to compute explicit values that can only arise over multiple iterations. This is the conceptually simpler method and is used, for example, in the preceding analysis.

Or conversely one could retain only the fundamental cyclic processes; this is the structurally simpler method and, strictly speaking, the more correct one. Each FSA computation is a whole universal moment, so only the constants and the phases are stored and all other values must arise from virtual computations that are implicit within the cyclic processes. These are essentially virtual computational spaces that arise within the cyclic computational process driven by the fundamental FSA iterations. Hence their computational values need not be explicitly represented; they are encoded into the underlying framework of the dynamical ceiling using minute phase differences.

An indication of multiple iteration computation comes from the solution of the Schrödinger wave equation iħ ∂ψ/∂t = Hψ, whose solution is ψ(t) = e^(−iHt/ħ) ψ(0), where H is the Hamiltonian, the key dynamical energy equation for the system that steps the system forward in time. But H is a matrix, so to calculate an exponential with a matrix exponent one uses the Taylor series expansion of the exponential function, e^A = I + A + A²/2! + A³/3! + … , where the higher powered matrices involve interactions over multiple iterations. Thus the dynamical group must be considered in regards to all of its SM's, or all possible transitions between system states, to determine a particular state. Thus all the SM's in the dynamical group are required to compute this summation, but the factorials in the denominators soon make the higher powered terms negligible, so only a few terms need be computed.
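A hedged sketch of this multi-iteration computation follows, using a small arbitrary Hermitian matrix in place of a real Hamiltonian and illustrative values for ħ and the time step; it builds the time evolution operator from the truncated Taylor series and confirms that the result behaves as a valid (unitary) state transition operator.

```python
# A sketch of the time evolution operator U = exp(-i*H*dt/hbar) built from the Taylor
# series of the exponential, where the higher powers of H correspond to the higher SM
# powers of the dynamical group. The small Hamiltonian H used here is purely illustrative.
import numpy as np

hbar = 1.0
dt = 0.1
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])                 # a toy Hermitian "energy" matrix

A = -1j * H * dt / hbar
U = np.zeros_like(A)
term = np.eye(2, dtype=complex)            # k = 0 term: the identity
for k in range(1, 20):
    U = U + term
    term = term @ A / k                    # next term: A^k / k!  (factorials soon dominate)

print(np.allclose(U @ U.conj().T, np.eye(2)))   # True: the truncated series is unitary
```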

General Discussion

The Fundamental Constants

The above connection between the fundamental constants and the requirements of an FDCIS dynamical regime implies that the fundamental constants of this universe are tuned precisely so that they may provide such a dynamical regime within which this empirical universe could arise. Or, more likely, they are thus tuned because they arise out of such a dynamical regime and could not manifest a coherent universe otherwise. The connection adds credence to the underlying proposition of the computational paradigm: that this physical universe is indeed a simulation being computed by a finite discrete closed information process.

Akashic Field

The dynamical ceiling, or the Simulator/EC membrane, or the Zero Point Field, or the Planck scale of our physical universe, is as far into the underlying computing machinery of our virtual reality as we can explore using purely empiricist techniques. We cannot perceive and measure any deeper; to go deeper requires intellect and inference. However the effects of the deeper computing machinery can be felt, since it is that machinery which computes every aspect of our virtual reality, so its effects are seen throughout the whole of the empirical universe on all levels. These effects give rise to the idea of an Akashic Field, but this is a way of conceptually grasping the whole of the underlying computational process; it is not one thing, but it has a coherent influence so it seems to be one thing from our perspective. Thus the phrase Akashic Field refers to everything above the Simulator/EC membrane, i.e. the Simulator, Transcendent Context (TC) and transcendent process (TP); whilst the phrase physical universe refers solely to the empirical context (EC). For more information regarding these terms refer to the detailed VR analogy.

String Theory?

The most general FDCIS implementation would not use complex data but octonions, which are similar to complex data except that they have one real and seven imaginary components. These are the most general form of data since all other forms can be implemented using octonions (real, imaginary, complex and quaternion). As one goes from real to complex to quaternion to octonion the amount of coherent causal structure within the mathematics diminishes so beyond octonions there is simply not enough structure to produce a coherent group or information space; so these are the most primitive and low level form of information.
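For illustration, octonions can be generated from the reals by repeatedly applying the Cayley-Dickson construction; the sketch below (standard mathematics, not tied to any particular FDCIS implementation) builds the 8-component product and demonstrates the non-associativity that distinguishes octonions from the reals, complexes and quaternions.

```python
# A sketch of octonions built by the Cayley-Dickson construction: 8-component numbers
# whose multiplication is non-associative, unlike reals, complexes and quaternions.
import numpy as np

def cd_conj(x):
    if len(x) == 1:
        return x.copy()
    h = len(x) // 2
    return np.concatenate([cd_conj(x[:h]), -x[h:]])

def cd_mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - conj(d)b, da + b conj(c))
    if len(x) == 1:
        return x * y
    h = len(x) // 2
    a, b = x[:h], x[h:]
    c, d = y[:h], y[h:]
    return np.concatenate([cd_mul(a, c) - cd_mul(cd_conj(d), b),
                           cd_mul(d, a) + cd_mul(b, cd_conj(c))])

rng = np.random.default_rng(0)
p, q, r = (rng.standard_normal(8) for _ in range(3))   # three random octonions
left = cd_mul(cd_mul(p, q), r)
right = cd_mul(p, cd_mul(q, r))
print(np.allclose(left, right))   # False: the octonion product is non-associative
```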

As described above, the empirical context is a two dimensional structure indexed by state and time (the SV accumulated over all iterations); when this is filled with eight dimensional data it gives a ten dimensional empirical universe. The seven imaginary components produce seven independent phase components or seven fundamental dynamical quantities, which could be interpreted as three spatial dimensions and four forces. It is no surprise that String Theory suggests a ten dimensional universe and also relies heavily on the use of octonion data. From within a simulated empirical universe the FDCIS interaction loops could be conceived of as one dimensional filaments (causal strings) that vibrate through ten dimensions and thereby manifest empirical phenomena. However it is far clearer and more intuitive to conceive of the context as that of an FDCIS that is structured according to SMN and system theory.

It seems highly likely that these correspondences between FDCIS's and String Theory are not just an accident. If this universe were the product of an FDCIS then the system theoretic principles apply on all levels from the lowest to the highest. Whilst String Theory has been searching for a Theory of Everything, it was really only expected to apply to the lowest possible level, which we would then have to extrapolate enormously to gain any sensible understanding of everyday phenomena. However it seems possible that String Theory and System Theory are in fact the same theory, one applied to the lowest level, the other applied to all levels. If this is the case then any progress on the lowest level of String Theory could be translated into high level understanding via System Theory, and conversely advances in System Theory may provide deeper insights into String Theory. In fact, because of the universality of System Theory, all particular scientific domains are sub-domains of System Theory, so System Theory can act as a cross fertilisation and communication facilitator between all domains, and eventually all may be seamlessly unified into a single coherent body of human collective knowledge.

www.Anandavala.info