Cyclic Computation

See also:
The Mathematical Analysis,


In this discussion I use finite discrete notation. If you haven't already familiarised yourself with this then refer to here.

The Cyclic Process

The Primitive Cycle

Consider a fundamental cyclic process represented by   which is iterated according to    thus producing subsequent values of z raised to successively higher powers, hence  . Say it takes A steps to complete one full cycle so in each iteration the phase evolves by discrete steps of  which is the phase quantisation and dt is the simulation time step. When  then one full cycle has been completed. If  then after A iterations the phase has moved through  , so there is a slight phase offset. However we still treat A iterations as one full cycle.

Multiple Cycles

The primitive cycle is called a (0)cycle and is treated as one primitive iteration within a higher level iterative cyclic process, so the (0)cycles are themselves (1)iterations that step the phase around a (1)cycle where  . In this manner we build a model of cycles within cycles where each (n)cycle requires A n iterations to complete and each iteration is in fact a whole lower level cycle (except for the primitive cycles). As the primitive cycle rotates these later cycles emanate or unfold in sequence.

If  then all higher level cycles have  and no dynamics can be simulated via these cycles. It is only when   that there is an inherent phase eccentricity that propagates throughout the cycles causing the phase to evolve within the context of each cycle.

We now have multiple nested temporal contexts so we will use temporal notation, but express all times in their native units for now. Let each primitive iteration have a period of T0 = 1 (0)iteration then  which is equal to A0 (0)iterations or one (1)iteration and in general  and  if A n = A for all n, i.e. if all cycles require the same number of iterations. In this discourse we will consider this latter case.

The phase step for each cycle varies throughout the cycles as  .

An End to the Emanation

Now that we have initiated this cascade of cyclic processes, they unfold with successively longer periods. However each of these cycles is considered to represent a plane-wave or non-localised quanta with well defined momentum within a simulation universe, so the period of the cycle gives a frequency f and an energy E. The (0)cycle defines a maximum energy ceiling out of which unfolds lower energy cycles or quanta.

These cycles unfold until beyond some point the period becomes so long that the corresponding energy is less than the minimum energy representable within the simulation space dE (see discussion on finite discrete information) and quantisation entropy makes it impossible for the lower cycles to represent information within the simulation space. So the last cycle that can take part in an empirical simulation has  (expressed as a number of (0)iterations) where Ln is the largest possible value of n. So  so  is the requisite variety of the cycle-period state variable.

What is Energy?

Energy is a measure of activity or change within the dynamical simulation space and this is represented by the phase component. Consider a single cycle;  so in each iteration there is only amount of dynamical change that can occur within the period of that iteration. The highest energy is  when the phase displacement occurs in a single primitive iteration and the lowest energy is  when the phase displacement takes an entire (Ln)cycle.

So  and  so we get  and  as the requisite variety of the energy state variable, which is equal to K t.

For each cycle there is a period and an energy so there are equal numbers of these values so the requisite variety of each state variable is equal. This is the case for all dynamical variables where there is one value associated with each cycle. Hence Kn = K t = KE = K f = K p... etc.

Canonical Case

Now consider the case where A = 2π which is a 'canonical' case where    and each cycle takes iterations to complete, thus  and the cycles form within a base information space which is fitting for a cyclic process. Furthermore, let us define a unit of time within the simulation space where one sim-second corresponds to one radian of phase evolution within the (Ln)cycle. Hence, by definition, one (Ln)cycle takes sim-seconds (or s-sec's) to complete. Therefore  

Compressed Cyclic Model

Now that we have unfolded these cycles out to some well defined limit, we know the extent of the cyclic simulation process and can effectively compress this whole process of cycles within cycles down into a single cycle by setting  (0)iterations instead of  (where primes indicate old values) so we have taken the longest period cycle and compressed it into one cycle, thereby compressing all the other cycles as well. This also brings the numerical value of  , expressed in terms of (0)iterations into line with  expressed in terms of sim-seconds. We redefine the value of the fundamental period as  so there are effectively now a large number of short period iterations in place of a single previous iteration, and instead of iterations to complete approximately radians, it now takes  iterations. So has changed from  to  which is exact because there is no more need of Δ since now everything has been compressed into one cycle and the whole cycle must be complete in itself so there is no phase eccentricity (it has been built-in).

A Quick Check

Let us consider this briefly and check that it is still equivalent with the original cyclic process.

After iterations of this new process the simulation time has advanced by  where  is the conversion factor from the old time (primed) to the new time (see Conversion Factors for more details) so this is equivalent to iterations of the old regime.

Energy Scaling

In this new regime the highest energy is  and the lowest energy is  , note that we still have  .

In this compressed regime we have dφ = T0 and this defines the maximum energy value as one, however if the inhabitants of the simulation universe were to use their own units of energy then the numerical values of these need to be converted. Say that the inhabitants define their maximum energy value as Ep (they call it the Planck Energy) then  and  since  and  which is the Planck Time and dφ = h, which is Planck's Constant.

Now we have   ,   , dφ = h and T0 = Tp so this brings us into a dynamical context with properties much like that of our physical universe.

Furthermore, here lies the reason why the equations of quantum physics transform into classical equations as h0. Since h is the quanta of empirical action for each iteration, as h0 whilst the simulation dynamics remain unchanged Δt → 0 and Δφ0 and the iteration frequency becomes infinite. Hence all empirical contexts become infinitely detailed and empirical processes become perfectly smooth.

Furthermore, energy can be calculated using the various cycle's n over a full cycle. So we get  and  .

How is phase evolution converted into simulated dynamics?

So far we have cycles with lengthening periods so descending frequency and energy. When interpreted as quanta they are conceived of as waves within a simulation space and therefore have a wave length λ0 that is defined in terms of spatial units within the simulation space. This quantity λ0 is constant for all cycles and is the means by which one translates between transcendent computational quantities and empirical spatial quantities. The various frequencies fn produce various velocities vn where  so these plane waves have a phase velocity less than c. There is also a λ n associated with each cycle and it represents the wavelength of a photon associated with that cycle  so this is equivalent to a photon with energy En < Ep and velocity equal to c.

The cycles represent decreasing phase velocities from  to  and   as expected.

Phase Velocity and Particle Velocity

The velocities defined above are the phase velocities (redefine as vφ) for plane-wave quanta associated with an empirical particle, however the velocity of the particle vp is measured by inhabitants of the empirical universe as twice this velocity. This is because here we are dealing with non-localised plane-wave quanta; their energy or momentum is well defined but their location in space is undefined. To transform this momentum representation into a position representation we must localise the quanta in space. This requires the superposition of a large number of plane-waves with subtly different frequencies and phases. We build up a superposition of these waves using a fourier integral and they interfere and produce a well defined wave packet or localised quanta, which has a group velocity equal to the particle velocity. However now the momentum is undefined. These issues have been discussed at length elsewhere in regards to the general properties of all waves and in particular the Heisenberg Uncertainty Principle where  is a fundamental constraint on our knowledge regarding the empirical properties of these quanta.

This process is reminiscent of a hologram. The particle perspective is a bit like a holographic image that is constructed from the interference of countless waves, and the individual rays associated with individual state vector elements in the FDIS (Finite Discrete Information System) are like elements in a holographic medium. This is discussed further a little later in regards to holographic simulation.

Various Empirical Quantities

At the level of the ceiling c is the maximum velocity within the empirical context. This (0)context defines a dynamical computational regime in which the empirical velocity is v = c so only maximum energy quanta or photons can exist in this context. This context, when considered in conjunction with the SMN implementation of this cyclic model, produces something remarkably like an Akashic Field, which provides a low level inter-connective infrastructure that permeates the whole simulation cosmos.

As one considers multiple iterations one moves into a context in which photons may have different frequencies and particles may have v < c and mass m > 0. Thus we enter the empirical simulation universe proper.

The first fully empirical cycle is associated with the largest empirical energy so it represents the first phase velocity less than c,   but the particle velocity is twice the phase velocity so this cannot alone be associated with a physical particle. However it can represent a high energy photon with  and  or it can be a component in a superposition that produces a wave packet associated with a particle but it represents the ultra high frequency aspects of the particle's evolution and the particle's velocity is dominated by the many lower frequency, lower velocity plane-wave components.

When  this is the phase velocity associated with a quanta that represents the lightest possible particle accelerated as close to the speed of light as it can get and thereby possessing maximum particle momentum.

At the level of the floor, the (L n)cycle represents the lowest empirical energy so it represents either  , which is the minimum phase velocity associated with a particle or the longest wavelength  for the lowest energy empirical photon.

Note that  and  .

The Empirical Universe is a Perceptual Construct

The above model defines aspects of the programming that simulates this empirical universe, however the type of universe that one perceives within the simulation depends upon how one interacts with the underlying cyclic processes and this depends upon the nature of ones perceptual apparatus. The cyclic process forms a common information engine but empirical systems may perceive and interpret the information differently.

At the lowest level a system's perceptual apparatus is characterised by a perceptual frequency where  is the Planck frequency which is the maximum perceptual frequency for any empirical system, this is constrained by the quantisation within the TP itself otherwise we could have  and  for a smoothly varying classical universe with infinite resolution and infinite energy.

The minimum perceptual frequency is  or once per complete cycle.

When pf = Lpf the system can perceive or sample every single iteration so there are  possible values that can be perceived, and the values vary with every primitive iteration so the perceptual system needs to be able to distinguish this many unique values. When pf = dpf the system can only sample once per cycle and the cycles are not eccentric so the value will remain constant and no variation will be perceived.

Different Empirical Worlds

Given a particular empirical system with a particular pf, let us determine the nature of the empirical world that they will perceive. First let us consider a minimum capacity perceptual system with pf = dpf, then we'll consider pf = 1 and then the maximum capacity perceptual system.

If  then   gives a value of  so there is only a single constant cycle or a single type of quanta or a single frequency that is perceptible since  hence constant values. (note: here T0 represents the smallest cycle period that the system can perceive, which is not necessarily the absolute primitive cycle.)

If pf = 1 then  (this is the native T0 unit in this compressed regime) gives a value of  so there is more than one cycle and  . But what is the value of dn?

  so  but   . Combining these two equations we get  where  , hence  .

Now Kφ = Kn and Lφ = 2π so  so it takes approximately 4g iterations to complete a full cycle and there are 4g distinct virtual cycles each representing a particular type of empirical quanta.

Also  and  and   where  and  .

When  then   gives a value of  and  (too small for my calculator to compute in this way)

But KE = Kn so consider  and  and   so  and  and   so  and  however we can rescale these to fit any arbitrary energy scale so we multiply through by Ep to align it with the standard units for empirical energy used by the simulation inhabitants, so  and  .

Is our choice of units arbitrary?

In the above calculation, when we defined the empirical unit of time we considered only the transcendent computational process and defined it such that  , i.e. one radian of transcendent phase evolution corresponds to one second of empirical time.

The exact numerical value of the Planck Time Tp depends on our arbitrary units for energy, length and so on but the actual unit of time that we use is identical to the unit defined in terms of the transcendent computational process. Is this an artefact of some oversight within the calculation or is this indicating something fundamental?

The two underlying principles responsible for our particular numerical measure for the unit of seconds is the system of 24/60/60 time keeping inherited from the Babylonians (I think) and the angular momentum of the planet Earth. The tilt of the axis and the orbit around the Sun combine to modulate this fundamental frequency and produce the seasonal and yearly cycles, however on average the length of a day is determined by the angular momentum of the planet.

Does this indicate that the Babylonians understood much of this, or that our very planet is somehow aligned with a harmonic of the fundamental frequency of the universe? Could these correspondences be related to the fact of our existence here at all? Could sentient life have formed here if these harmonic relations were not satisfied? This is deeply puzzling! Indeed our own heart beats are approximately 60 beats per minute or one second per beat.

Time Dilation

So far we have considered only equations of the type  where  and f varies for each cycle. However if we consider this in terms of a simple harmonic oscillator where  and  where p indicates Planck values, then  and the iterative equation we are dealing with is  . So as the energy increases  and since t is the iteration time step  we get  and the phase component is zero so there is no dynamical computation or one may say the dynamics are timeless. Hence as the energy of a quanta increases, either due to greater mass-energy or kinetic energy, the rate at which it's dynamics proceeds tends to zero. So the fastest rate of time occurs for the lightest and slowest quanta. This produces the phenomenon of relativistic time dilation.

The above discussion describes how a large energy computation results in a very small but recall that  so many iterations are required to effect the appropriate phase displacement associated with that energy so more computational iterations are required to simulate the quanta, hence more computational resources are drawn from the transcendent computer.

A system within a simulation world can draw upon unlimited (but not infinite) resources from the transcendent computer; it may conceivably require a computation that pushes the computer to the limit. External observers of the computer will see it slow down due to congestion, but from within the empirical universe the being can only perceive within the context of particular simulation moments. So even if the computer becomes so loaded that the time to compute one simulation moment changes from milliseconds to days, within the simulation the moments keep flowing in sequence and no difference can be discerned. So different systems in different contexts such as states of motion are observed to exhibit different time dilations due to their different computational loads, however from within their own context things seem unaffected.

Furthermore, one can go through a similar argument as above but keeping the ω.t term constant and using dx as one's simulation spatial step (analogous to a simulation time step) and λ n as a variable wavelength that translates the transcendent computational quantities into equivalent physical distances. One then finds that as the energy increases λ nxp and all empirical systems experience the phenomenon of length contraction.

This then gives us a quantised relativistic empirical context or a physical universe with properties much like our own, both in terms of quantum phenomena and relativistic phenomena.

Holographic Simulation?

How does this analysis integrate with SMN and FDIS's? In this analysis I have essentially been exploring the properties of a single primitive datum as it flows through the computational process or the computational stream (or causal 'string') associated with a single primitive system. This is just one element in the state vector (SV) and one element of the system matrix (SM) as they interact. But in the wider context of SMN the state vector contains countless such datums and is operated on by the whole system matrix, indeed each row of the SM processes the entire SV and produces a new state for a particular SV element. Thus we have many primitive systems (SV elements) which are processed in a massively parallel manner.

Throughout most of the development of the concept of an FDIS I have focused on system models where the systems and subsystems are actual objects in a world or where they are abstract variables or pure states within a computation such as cases where they are components of a quantum probability distribution or dynamical variable associated with a Newtonian particle. But from the cyclic computational analysis we get causal strings (virtual system processes) that have 'vibratory' properties much like light, hence I call them 'rays' in this context. Thus when a SM row processes the SV and merges all of these rays into a single resultant state, these rays interfere just like light. There are many subtle phase variations and these create interference patterns.

Hence we have a field of primitive elements in parallel and also phase interference. This suggests to me that holographic concepts may be involved here. Could it be that an FDIS combined with cyclic computational elements produces a holographic computational context that is massively parallel and that creates an empirical simulation universe in the form of a hologram? Thinking of the primitive systems as explicit objects in a world out of which higher level systems are composed is analogous to pixels in a photograph and thinking of them as interfering rays that form a collective composite pattern is analogous to holographic elements within a hologram.

This physical universe and also consciousness itself exhibits some holographic properties that have been discussed at length in recent years. Thus it is possible that these mathematical models will lead in that direction.

But are there two levels of parallelism or are they equivalent and we have conceptualised them differently? The finite state automata (FSA) provides a massively parallel computational framework that allows every point to interact with every point in every moment of existence without any explicitly holographic concepts being required. But are these interacting points primitive systems within an empirical simulation or are they elements within a holographic medium?

If it is the prior case then the empirical context is directly simulated with the primitive datums within the computation being the actual primitive systems within the universe and the FSA computes the causal connectivity in a massively parallel manner. This is analogous to a photographic analogy where the primitive systems are pixels.

If it is the latter then the medium contains the empirical information distributed in a holographic manner, thus there are two levels of parallelism. In this holographic case the underlying computational framework processes the medium in parallel where each holographic element interacts with every other element in each moment and this dynamic holographic medium encodes the empirical simulation as phase interference patterns that are distributed throughout the entire medium. Only through a process of superposition and interference does the hologram of the simulation context (or the empirical universe) arise. This is possibly related to the wave particle duality where the particle picture is the holographic image and the many wave components that superpose to form that image are associated with elements within the holographic medium. These elements interact via the FSA and this implements the dynamics of the whole context. This was briefly mentioned earlier in regards to Phase Velocity and Particle Velocity.

In most respects the first level of interconnection within the FSA can explain all of the interconnectivity within the empirical universe and allow for quantum non-locality, but can it also explain the parallelism within systems such as the human brain and collective consciousness or perhaps the subtle field effects of the Akashic Field? Can it explain underlying phenomena that are not simply interconnected but are uniformly distributed such as the fields?

However, unless there are indeed explicit holographic effects, like fragments producing low resolution images of the whole, I see no need to include holographic phenomena. Is our image of a particle really a low resolution image of the entire cosmos? I think not, but a particle is a fragment of the holographic image and not necessarily of the holographic medium.

It could be that the massive parallelism and distributed nature of the FSA computational process led people to liken it to holograms when it is in fact a FSA rather than a hologram. But what of consciousness and memory, there are some rather holographic phenomena there I believe, and consciousness, in its most primal form, is the essence of the computation that underlies the cosmic simulation process.

Perhaps there are parallels between finite state automata and holograms that have yet to be explicitly realised, but more likely there are indeed two levels of parallelism, one within the FSA computation that implements the dynamics and one within the holographic medium that encodes a holographic empirical universe.

Final Comments

I do not as yet understand all of the details of how this Cyclic Computational Model fits into the general scheme of Finite Discrete Closed Information Systems so I am still a long way from being able to computationally simulate a realistic physical universe. However the confluence of intricate connections between this cyclic model, FDCIS's, string theory, the Akashic field, the holographic universe, quantum theory and general relativity suggests that these ideas may lead us in a direction wherein we may eventually unify quantum physics and general relativity and we may fully understand the meaning of string theory and thereby develop a “Theory of Everything”.

Furthermore, due to the intimate links between system theory and string theory this may provide us with a truly “Universal Theory of Everything” that is expressed not only at the lowest possible level of strings, quanta and particles but which may also be extended to all systems on all scales and in all contexts. It would also encompass all perceptual phenomena, such as our experience of the duality of subject and object and their unification within the non-dual transcendent information space thereby providing not only an outward science of reality but also a fully transcendent science that encompasses both the empirical perspective (traditional science – outward looking) and the subjective perspective (spiritual science – inward looking).

Nothing less than this deserves the name “Theory of Everything”.

For the broader context in which the above analysis took place see: The Mathematical Analysis.



www.Anandavala.info