**Finite Discrete Information**

See also: The Mathematical Analysis.

In its most general form information is discernible difference, which may manifest in any medium. Therefore information is a truly general concept that can be applied to any context. The discernible difference encodes the information and thereby represents it or symbolises it in some coherent manner.

In general mathematics, where quantities can be symbolically represented without being actually represented, one can operate on the concept of infinity; but outside of this context the concept of infinity cannot be actualised. By this I mean that one can represent an infinite set by calling it 'x' without ever having to explicitly represent the set. For example, one can operate on the set of integers without ever actually having represented the set of all integers. See this discussion for more on this: Does Infinity Exist?

But in a computational context one must explicitly represent a set for it to exist, which means that infinite sets cannot exist. This work ultimately rests upon the fundamental principles of computation, which rest upon the principles of representation. If something cannot be represented and computed then it cannot exist, because existence IS representation and change IS computation.

To represent something to infinite detail requires infinite information, and to represent a value that is infinitely large requires infinite information; but an infinite amount of information would take an infinite amount of representational capacity and an infinite amount of time to compute, unless one had infinite capacity. In this infinite regime, however, all finite well-formed values lose their meaning.

A variable **x** is a way of storing information using discernible difference, so a number line is an information space and a numerical value is a particular symbol taken from that space. For example, the set of all integers is closed under addition, subtraction and multiplication: it forms a *group* under addition and, with multiplication included, a *ring*. These operations implement an *algebra* over the set of integers, which is effectively a causal framework that permeates the space and underlies the behaviour of all variables within that space and hence all phenomena within that space.

In the context of the __computational paradigm__, *real numbers* are idealised concepts that have arisen from our perception of the world; from our human perspective things seem infinitely detailed and the universe seems infinitely large. Theoretically, if one has two numbers, one on the order of 10^{-10000} and one on the order of 10^{10000}, and one mixes these numbers together in an equation, all of their information is retained and one can extract them again later. This is because mathematics generally relies on real numbers, so an infinite amount of information is representable and mathematics is non-entropic, i.e. it loses no information. But in practice, when one tries to explicitly represent numbers in any realistic representational framework where the numbers aren't ideal concepts but are, for example, binary data, one finds that discrete numbers behave differently to *real numbers*.
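This difference is easy to demonstrate. A minimal sketch in Python, using IEEE-754 double-precision floats as a finite discrete stand-in for *real numbers* (the magnitudes are smaller than those above, since doubles cannot hold 10^{10000} at all):

```python
# With ideal real numbers, mixing values of vastly different magnitude
# loses no information.  With a finite discrete representation -- here an
# IEEE-754 64-bit float -- the smaller value vanishes entirely.
big = 1.0e300
small = 1.0e-300

mixed = big + small       # the small component cannot register
recovered = mixed - big   # try to extract the small number again

print(recovered)          # 0.0 -- the information has been destroyed
print(recovered == small) # False
```

The small number's information is irretrievably lost the moment the two are mixed; the representation is entropic.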

The implication of this limitation of resolution is that if the physical universe is an information construct computed using finite discrete data then EVERYTHING within that simulated world must be quantised. This accords with quantum physics and also with “*Edward Fredkin's hypothesis of Finite Nature [which] begins with the assumption that all things eventually will be shown to be discrete and, conversely, nothing exists in smooth or continuous form*” (as described by Ross Rhodes in The Finite Nature Hypothesis of Edward Fredkin).

Before we begin the analysis I'll introduce some general notation that applies to any state variable **x**. We define **dx** as the smallest non-zero value that **x** may exhibit; this is the quantisation factor for the state variable. For example, **dx = 1** for the integers and **dx → 0** for the *real numbers*. Furthermore, **L_{x}** is the largest possible value for **x**.

If an information space is discrete then there is some minimum difference in value **dx** between neighbouring symbols. When **dx > 0** only a finite amount of information can be stored. For example, a modest integer may require 32 bits, which gives **2^{32} = 4,294,967,296** distinct states, so with **dx = 1** the maximum representable value is **2^{32} - 1 = 4,294,967,295**, which has ten significant digits.

Consider a finite discrete information space such as that produced by a state variable **x** that may assume only a finite number of values within a finite range. The variable **x** has a *requisite variety* (or total number of distinct states) **K_{x} = B^{q}**, where **B** is the base and **q** is the number of symbols (or the Kolmogorov Complexity of **x**). So an ordered sequence of **q** symbols chosen from a symbol set containing **B** distinct symbols can represent **K_{x}** distinct states. The fundamental and most commonly used base is binary, where **B = 2** and **K_{x} = 2^{q}**.

The set of distinct states within the information space has an inherent ordering principle arising from binary (or general combinatorial) arithmetic, which is the algebra of a binary information space. This ordering principle may be used to construct mappings between this information space and others, such as between the binary numbers from **[000, 111]** and the decimal (**B = 10**) integers from **[0, 7]** and so on. If two information spaces have the same requisite variety and each has an ordering principle, then there exists a simple 1-1 mapping between the two spaces and they are considered *equivalent*. However, if there is no ordering principle there can still be an arbitrary 1-1 mapping represented by a transformation matrix. So the only requirement for a 1-1 mapping or equivalence between two spaces is equal requisite variety, i.e. if **K_{x} = K_{y}** then the two spaces are *equivalent*.
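As an illustrative sketch in Python (the variable names are my own), the eight states of a 3-bit binary space can be mapped 1-1 onto the integers **[0, 7]**, either by the natural ordering or by an arbitrary permutation:

```python
from itertools import product

# Two information spaces with equal requisite variety K = 2**3 = 8:
# ordered 3-symbol binary strings and the decimal integers 0..7.
binary_space = [''.join(bits) for bits in product('01', repeat=3)]
decimal_space = list(range(8))

# The ordering principle on each side gives the natural 1-1 mapping...
natural = dict(zip(binary_space, decimal_space))
print(natural['101'])    # 5

# ...but any arbitrary 1-1 mapping will do; equal requisite variety is
# the only requirement for equivalence between the two spaces.
arbitrary = dict(zip(binary_space, reversed(decimal_space)))
print(arbitrary['000'])  # 7
```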

The distinct states may represent values in the range **[0, L_{x}]** or, with a quantisation factor **dx** and an offset **x_{min}**, values in the range **[x_{min}, x_{min} + (K_{x} - 1).dx]**.

**Definition:** A finite discrete state variable is any variable **x** such that there exists a largest finite value **L_{x}** and a smallest non-zero representable value **dx > 0**. The value of the state variable is determined by **x = x_{min} + r_{x}.dx**, where **r_{x}** is an integer in the range **[0, K_{x} - 1]**.

In a software implementation the FD variable data structure is **{x_{min}, r_{x}, dx}**. One can also store derived quantities such as **K_{x}** and **L_{x}** for convenience.
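A minimal sketch of such a data structure in Python (the class and field names are my own, assuming the **{x_{min}, r_{x}, dx}** structure above together with **K_{x}** for range clamping):

```python
import math
from dataclasses import dataclass

@dataclass
class FDVariable:
    """A finite discrete state variable stored as {x_min, r_x, dx}."""
    x_min: float  # smallest representable value
    r_x: int      # state index in the range [0, K_x - 1]
    dx: float     # quantisation factor, the smallest non-zero step
    K_x: int      # requisite variety, e.g. B**q

    @property
    def value(self):
        # x = x_min + r_x.dx
        return self.x_min + self.r_x * self.dx

    def assign(self, y):
        """Store the largest representable value x such that x <= y."""
        r = math.floor((y - self.x_min) / self.dx)
        # clamp the state index into the valid range [0, K_x - 1]
        self.r_x = max(0, min(self.K_x - 1, r))
        return self.value
```

For example, with `x_min = 0`, `dx = 1` and `K_x = 256`, assigning `3.7` stores `3.0`; the fractional remainder is discarded as quantisation entropy.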

When **r_{x} = B^{q} - 1 = K_{x} - 1** then **x = x_{min} + (K_{x} - 1).dx = L_{x}**, the largest representable value.

When **r_{x} = 0** then **x = x_{min}**, the smallest representable value.

A finite discrete variable can only exhibit certain distinct values, so when an arbitrary value **y** is assigned it is represented by the largest representable value **x** such that **x ≤ y**, or in other words **r_{x} = ⌊(y - x_{min})/dx⌋**.

Suppose that **x'** is just less than **dx**, i.e. **x' = dx - Δ** where **0 < Δ < dx**. Then consider **y = x + x'**; since **y = x + dx - Δ** is less than **x + dx**, the assigned value remains **x**.

So no effect has been registered by the state variable and the information conveyed by **y** has been lost due to *quantisation entropy*. Quantisation entropy plays an important role in determining the nature of the information systems that may form in FD spaces and therefore the types of empirical universes that may arise.
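The loss can be sketched as follows (a hypothetical `quantise` helper standing in for the assignment rule above):

```python
import math

def quantise(y, dx):
    """Represent y by the largest representable value not exceeding it."""
    return math.floor(y / dx) * dx

dx = 1.0
x = 5.0                 # a representable state
delta = 0.25
x_prime = dx - delta    # x' = dx - delta, just less than dx

y = x + x_prime         # 5.75
print(quantise(y, dx))  # 5.0 -- x' fails to register; its information is lost
```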

A simpler type of FD state variable has **K_{x}** distinct states representing values in the range **[0, L_{x}]**, i.e. with **x_{min} = 0**.

Consider a complex variable **z** where **p = z.z^{*}** is the computational power and the real part of **z** provides the state variable **v**.

Furthermore, since the components of **z** are finite and discrete, the variations in the state variable **v** are also finite and discrete. Thus if the computational power **p** is large then the range is large and small variations of the balance between the real and imaginary components are possible, so the *quantisation* of the resulting state variable is small and the *resolution* is high; hence the dynamics are very smooth and classical-like, as well as open and expansive. If the computational power is small then the range is small, the quantisation is large and the resolution is low; hence the dynamics are grainy, jittery, quantum and non-classical, as well as closed in and cramped.

Consider the case where the real and imaginary components of a complex variable **z = a + i.b** are constrained so that their quantisation is **da = db**. Now consider the case of equal maximum power where **L_{a} = L_{b}**; hence the first positive value of the power is **p = da^{2}**, which gives **dp = da.db** and so **K_{p} < K_{a}.K_{b}**. Thus the requisite variety has reduced and it takes two state variables **a** and **b** to represent the single power variable **p**.

Non-entropic computation within a finite discrete context can only arise if the information spaces associated with all computational variables are appropriately aligned; then there is closure. If two variables are proportional (i.e. if **x = c.y** where **c** is some constant) then their spaces must be equivalent (equal requisite variety). But if **x = y^{2}** then closure requires that **dx ≤ dy^{2}** and **L_{x} ≥ L_{y}^{2}**.

If there are values **dw** and **dy** such that **dw.dy < dx** then the equation **x = w.y** can result in *underflow* where the calculated value is too small to represent in the resultant state variable, hence quantisation entropy destroys the information. If **dx ≤ dw.dy** then there can be no underflow.

Underflow leads to loss of information in the least significant bits of low value computational data so, other than losing information, its behaviour is well formed. It is therefore allowed to occur in the context of virtual computations and provides an effective floor to the simulation space, below which signals cannot register so dynamics cannot occur. However in the context of the lowest level transcendent data there can be no underflow, otherwise the state variable power could gradually leak out of the state vector and the computational process would wind down to an irreversible halt.

If there are values **L_{w}** and **L_{y}** such that **L_{w}.L_{y} > L_{x}** then the equation **x = w.y** can result in *overflow* where the calculated value is too large to represent in the resultant state variable. If **L_{x} ≥ L_{w}.L_{y}** then there can be no overflow.

Overflow leads to serious corruption of the most significant digits of high value computational data, producing nonsensical dynamical behaviour, so it cannot be allowed to occur at all. Thus there is a strict and unbreakable limit on energy, velocity and the maximum values of all dynamical variables within the simulated dynamical space. This is built into the low level structure of the dynamical simulation.

So long as the two conditions of no underflow and no overflow are met, *closure* is ensured and one can conduct computations within a finite discrete context without incurring any disruption to the overall computation. One can treat the variables as if they are *real numbers* with unlimited capacity, and the inherent structure of the computation will ensure that their capacity is never exceeded, so their limits are never directly encountered; from within the computational space they seem, for all intents and purposes, to be unlimited. Indeed, from within a simulation universe this is the closest that one can come to the concept of *infinity*, i.e. transcendently finite but empirically unlimited.
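The two closure conditions can be checked together. A minimal sketch in Python (the function name is illustrative):

```python
def closed_under_product(dx, L_x, dw, L_w, dy, L_y):
    """Test closure of x = w.y between finite discrete spaces:
    no underflow requires dx <= dw.dy,
    no overflow requires L_x >= L_w.L_y."""
    no_underflow = dx <= dw * dy
    no_overflow = L_x >= L_w * L_y
    return no_underflow and no_overflow

# A result space fine enough and large enough for the product is closed...
print(closed_under_product(dx=2**-20, L_x=2**20,
                           dw=2**-10, L_w=2**10,
                           dy=2**-10, L_y=2**10))  # True
# ...but a coarser result space permits underflow, so closure fails.
print(closed_under_product(dx=2**-18, L_x=2**20,
                           dw=2**-10, L_w=2**10,
                           dy=2**-10, L_y=2**10))  # False
```

Powers of two are used so that the products are exact in floating point.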

By this I mean that the transcendent process (TP) is indeed finite but the programming and dynamics are such that from an empirical perspective, within the simulation, there is nothing that we could conceivably require of the TP that it could not deliver; in that sense it is unlimited in regards to us. The constraints are built into the empirical universe at such a low level that an empirical system cannot encounter them, so within these constraints the system is free to evolve and the TP seems *omnipotent*.

There are however effects arising from the use of FD variables; values produced by the computation may not always align perfectly with the set of potential values of the resultant state variable so the value is approximated. This creates a degree of 'jitter' or uncertainty within the calculation which produces a low level 'noise' throughout the computation. Whilst this does produce entropy to a degree, the jitter is randomly and evenly distributed so there is no overall corruption of the computation, such as would arise from overflow. Indeed a zero noise condition is a sub-optimal operating condition for any complex dynamical system.

In general an information space is an ordered field of discernible difference; it is a structured context within which forms may arise. The space has a topological structure or algebra that defines how each point relates to every other point, and through this connective network the forms may flow. Thus an information space is a space within which information *patterns* may be represented and transformed. These patterns may represent empirical systems within a simulation space, so the information space provides a context within which these systems can exist, interact and dynamically evolve. The information space is the underlying foundation of the simulation space; it provides the lowest level representation, connectivity and computation for all empirical systems.

A finite discrete information space is significantly different from a continuous space. For instance, whenever two values are multiplied, added, subtracted or divided, they produce a new value that must lie within the valid range and resolution; if it does not, it is approximated, thereby discarding information. So either the space forms a *group* that is closed (non-entropic) and can therefore sustain a dynamical process, or it forms what I call a *semi-closed group* that behaves like a group in most cases but is slightly *open*, so that information may be lost and the dynamical process may wind down over successive iterations.

Within a simulation universe the effects of finiteness and discreteness are not significant when one is far from the very small or very large scale. In the mid range, quantisation entropy may cause loss of information only in the nth decimal place and the finite boundaries are well beyond reach. However at the very small scale, when one's values **v** approach **v ≈ dx**, quantum effects dominate and quantisation entropy can create a lower limit for the simulation. If **v → L_{v}** then the simulation approaches its upper limits and relativistic effects dominate to ensure closure.

If the simulation is of Newtonian dynamics and **x' = x + v.t**, then quantisation entropy has effectively created a constraint on the lowest possible velocity **dv** that may effectively operate within this finite discrete space, i.e. we require that **dv.t ≥ dx** otherwise it cannot register an effect on **x**. Furthermore, the finite size of **L_{x}** implies a maximum possible velocity, i.e. we require that **x + v.t ≤ L_{x}**.
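A minimal sketch of this velocity floor (assuming the quantised assignment rule introduced earlier; the function name is my own):

```python
import math

def step(x, v, t, dx):
    """One Newtonian update x' = x + v.t, quantised to the space's dx."""
    return math.floor((x + v * t) / dx) * dx

dx = 1.0
x = 100.0
t = 1.0

# When v.t >= dx the motion registers an effect on x...
print(step(x, v=2.5, t=t, dx=dx))  # 102.0
# ...but when v.t < dx the motion cannot register at all: an effective
# minimum velocity dv with dv.t >= dx.
print(step(x, v=0.5, t=t, dx=dx))  # 100.0
```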

The following are examples of how finite discrete constraints create phenomena within information spaces where there would otherwise be few. In each of the two following cases, if the space were classical and had infinite resolution there would be nothing more than concentric rings of colour; however we see that the finite discrete constraints produce a menagerie of unlimited variety. In each case we have a two dimensional FD space (a bitmap) within which values are calculated and represented using only 256 colours (finite perceptual resolution), so the values are represented modulo 256 and the same colour range is used repeatedly, thereby creating bands of colour. These bands would still be only concentric rings, however, were it not for the FD constraints on the bitmap.

Note: if a system had a perceptual resolution equal to or greater than the colour range of the image then it would see only a single halo of colours radiating outward; the image would not be broken into repeated segments by a modulo operation. Thus perhaps if a being had such a perceptual resolution they would perceive the whole of their universe as a field of gradual variations where everything is an interconnected whole. It is the modulo process that breaks this into segments or pieces, and the lower one's perceptual resolution the more fragmented is one's view. This is perhaps related to the appearance of the mechanistic perspective from out of the true transcendent field, which is One and Whole.

I have elsewhere explored the effects of different perceptual resolutions; the same underlying field structure can appear completely different when viewed with a different perceptual resolution, thus the field that a system experiences depends on how it perceives and interacts with that field. The easiest way to get a feel for this is by stepping back from the image and viewing it from a distance. On an LCD screen one can look at the images from an angle, or one can rescale them, or load them into a graphics program and apply various filters such as a Gaussian blur, which resolves some of the larger patterns out of the complexity whilst obscuring the fine detail. Some very beautiful images can be derived in this way.

Below is an example detail taken from the centre of one of the FD force field images, which has had a blur applied to it...

This sequence of images illustrates Pythagoras' theorem calculated within a finite discrete space, which provides a distance-from-origin metric within that information space. It is nothing more than **r^{2} = x^{2} + y^{2}**, where **x** and **y** are the finite discrete coordinates of each point in the bitmap.

The sequence is produced by zooming in to greater magnifications and drawing out the quantisation effects. The progression of images goes on indefinitely and there is considerable variation in them; however there are common recurring themes of particle-like patterns forming lattices, and of four armed crosses and eight armed crosses. The program that I wrote to produce these creates an ongoing animation where one can observe these patterns as they slide and merge into each other; it is very hypnotic. The selection shown illustrates only a fraction of the variety and complexity of these phenomena.

Initially we see that there are only *classical* concentric rings but in the second image we see fringes appearing and these eventually break out into full scale quantised complexity.
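For those who wish to reproduce the images, a minimal sketch of the construction in Python (the original Euphoria programs are not shown here; the function name and the `scale` zoom parameter are my own):

```python
def fd_field(size, scale=1):
    """Compute r^2 = x^2 + y^2 over an integer grid and reduce it modulo 256,
    i.e. the finite perceptual resolution of a 256-colour bitmap."""
    half = size // 2
    return [[((x - half) ** 2 + (y - half) ** 2) * scale % 256
             for x in range(size)]
            for y in range(size)]

field = fd_field(256)
# Each cell is a colour index 0..255; the modulo wraps the values into
# repeating bands, and the integer quantisation of x^2 + y^2 breaks those
# bands into the lattice-like patterns described above.  Increasing `scale`
# corresponds to zooming in and draws out the quantisation effects.
print(field[128][128])  # 0 at the origin
print(field[128][131])  # (131 - 128)^2 = 9
```

Writing `field` out as a palette-indexed or greyscale bitmap (e.g. a PGM file) reproduces the banded ring images.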

This sequence of images illustrates an inverse-square equation, **1/r^{2}**, where **r^{2}** is calculated as above. This equation is related to the gravitational field surrounding our planet, or an electrostatic field surrounding a charged particle, and so on. This sequence of images contains a definite theme of an unlimited number of elaborate Christian crosses, Islamic style octagonal geometries, small Tibetan dorjes and so on. The range of religious imagery here is remarkable; I cannot help but think that countless mystics have observed these phenomena within the depths of their consciousness for millennia.

These images show only the field surrounding a single particle (or planet) but in other programs I have explored the quantised fields between multiple particles and allowed them to move about so that the fields change and fluctuate. However those programs were written several years ago and I seem to have misplaced them. Others may wish to explore these fields in more detail; especially because of the unlimited variety of beautiful images that arise from them. They are like fractals in that sense and are very easy to program.

Below I have provided some of these programs as Euphoria source code and also as executable programs, but in the executables the parameters are fixed and they may only be run on Windows systems at present. However, if you download the free Euphoria interpreter you can explore them and run them on any platform. The code was never meant for public consumption, so it contains only the bare essentials required to produce these images; it is really very crude.

{{add links to source code and executables}}