The Information Theoretic Metaphysics of Ross Rhodes

Ross Rhodes' main website is http://www.bottomlayer.com.

Summaries:
A Cybernetic Interpretation of Quantum Mechanics: Quantum Mechanics Implies That The Universe is a Computer Simulation.
The Finite Nature Hypothesis: Edward Fredkin proposes that reality is finite and discrete in all ways and that there exists an iterative cellular-automaton computational process that manifests the phenomenon of existence.

Related Documents:
Metaphysics of Virtual Reality
System Theoretic Metaphysics
Computational Paradigm
System Matrix Notation: a mathematical theory of general information systems, virtual reality and the metaphysical origins of this reality.


A Cybernetic Interpretation of Quantum Mechanics

Below are some quotes from "A Cybernetic Interpretation of Quantum Mechanics" (Quantum Mechanics Implies That The Universe is a Computer Simulation) http://www.bottomlayer.com/bottom/argument/Argument4.html.

Abstract:
This paper surveys evidence and arguments for the proposition that the universe as we know it is not a physical, material world but a computer-generated simulation -- a kind of virtual reality. The evidence is drawn from the observations of natural phenomena in the realm of quantum mechanics. The arguments are drawn from philosophy and from the results of experiment. While the experiments discussed are not conclusive in this regard, they are found to be consistent with a computer model of the universe. Six categories of quantum puzzles are examined: quantum waves, the measurement effect (including the uncertainty principle), the equivalence of quantum units, discontinuity, non-locality, and the overall relationship of natural phenomena to the mathematical formalism. Many of the phenomena observed in the laboratory are puzzling because they are difficult to conceptualize as physical phenomena, yet they can be modeled exactly by mathematical manipulations. When we analogize to the operations of a digital computer, these same phenomena can be understood as logical and, in some cases, necessary features of computer programming designed to produce a virtual reality simulation.

Wave Phenomena:
The more one examines the waves of quantum mechanics, the less they resemble waves in a medium. In the 1920s, Ernst (sic) Schrodinger set out a formula which could "describe" the wave-like behavior of all quantum units, be they light or objects... For a brief time, physicists sought to visualize these quantum waves as ordinary waves traveling through some kind of a medium (nobody knew what kind) which somehow carried the quantum properties of an object. Then Max Born pointed out something quite astonishing: the simple interference of these quantum waves did not describe the observed behaviors; instead, the waves had to be interfered and the mathematical results of the interference had to be further manipulated (by "squaring" them, i.e., by multiplying the results by themselves) in order to achieve the final probability characteristic of all quantum events. It is a two-step process, the end result of which requires mathematical manipulation. The process cannot be duplicated by waves alone, but only by calculations based on numbers which cycle in the manner of waves.

Richard Feynman developed an elegant model for describing the amplitude of the many waves involved in a quantum event, calculating the interference of all of these amplitudes, and using the final result to calculate a probability. However, Feynman disclaimed any insight into whatever physical process his system might be describing. Although his system achieved a result that was exactly and perfectly in accord with observed natural processes, to him it was nothing more than calculation. The reason was that, as far as Feynman or anybody else could tell, the underlying process itself was nothing more than calculation... A process that produces a result based on nothing more than calculation is an excellent way to describe the operations of a computer program. The two-step procedure of the Schrodinger equation and the Feynman system may be impossible to duplicate with physical systems, but for the computer it is trivial.
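
The two-step procedure is easy to state in code. Below is a minimal Python sketch (my illustration, not Rhodes' or Feynman's formalism): each path contributes a complex amplitude that cycles like a wave, the amplitudes are first added (interference), and only then is the result "squared" to give a probability. The unit amplitudes and wavelength are illustrative assumptions.

    import cmath

    def probability(path_lengths, wavelength=1.0):
        # Step 1: interfere -- sum one unit-amplitude phasor per path.
        total = sum(cmath.exp(2j * cmath.pi * d / wavelength) for d in path_lengths)
        # Step 2: "square" -- multiply the sum by its own conjugate.
        return abs(total) ** 2

    # Paths differing by half a wavelength interfere destructively ...
    print(probability([10.0, 10.5]))   # ~0.0
    # ... while equal paths interfere constructively.
    print(probability([10.0, 10.0]))   # 4.0

For a wave in a medium, step 2 has no physical counterpart; for a program, it is one more line of arithmetic.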

This quality of something having the appearance and effect of a wave, but not the nature of a wave, is pervasive in quantum mechanics, and so is fundamental to all things in our universe. It is also an example of how things which are inexplicable in physical terms turn out to be necessary or convenient qualities of computer operations.

Wave particle duality:
as instruments were improved, it turned out that the interference pattern observed by Young [double slit experiment] was created not by a constant sloshing against the projection screen, but by one little hit at a time, randomly appearing at the projection screen in such a way that over time the interference pattern built up. "Particles" of light were being observed as they struck the projection screen; but the eventual pattern appeared to the eye, and from mathematical analysis, to result from a wave.

Investigating the mechanics of this process turns out to be impossible, for the reason that whenever we try to observe or otherwise detect a wave we obtain, instead, a particle. The very act of observation appears to change the nature of the quantum unit, according to conventional analysis.

the "wave function" is somehow "collapsed" during observation, yielding a "particle" with measurable properties. The mechanism of this transformation is completely unknown and, because the scientifically indispensable act of observation itself changes the result, it appears to be intrinsically and literally unknowable.

As John Gribbin puts it, "nature seems to 'make the calculation' and then present us with an observed event" (J. Gribbin, In Search of Schrodinger's Cat, 111.). Both the "how" and the "why" of this process can be addressed through the metaphor of a computer which is programmed to project images to create an experience for the user, who is a conscious being.

Ross Rhodes considers the case of VR where the observer is situated "outside" the virtual space; thus he speaks of images, users and interfaces. I consider the case of Artificial Reality (AR), where the observer is an emergent form within the simulated space. If one changes "images" to "moments of experience", one moves from VR to AR, and his ideas then correspond with my own.

The "how" is described structurally by a computer which runs a program. The program provides an algorithm for determining the position (in this example) of every part of the image, which is to say, every pixel that will be projected to the user. The mechanism for transforming the programming into the projection is the user interface... for viewing one part of the image or another. When the user chooses to view one part of the image, those pixels must be calculated and displayed; all other parts of the image remain stored in the computer as programming... The user can never "see" the programming, but by analysis can deduce its mathematical operation by careful observation of the manner in which the pixels are displayed.

Regarding the ontological sameness of elementary particles:
If you were to study an individual quantum unit from a collection, you would find nothing to distinguish it from any other quantum unit of the same type. Nothing whatsoever. Upon regrouping the quantum units, you could not, even in principle, distinguish which was the unit you had been studying and which was another.

Roger Penrose has likened this sameness to the images produced by a computer. Imagine the letter "t." On the page you are viewing, the letter "t" appears many times. Every letter t is exactly like every other letter t. That is because on a computer, the letter t is produced by displaying a particular set of pixels on the screen. You could not, even in principle, tell one from the other because each is the identical image of a letter t. The formula for this image is buried in many layers of subroutines for displaying pixels, and the image does not change regardless of whether it is called upon to form part of the word "mathematical" or "marital".
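
Penrose's point can be made concrete with a toy rendering routine (the 3x3 bitmap is invented for illustration): every "t" is the output of the same stored subroutine, so any two instances are indistinguishable by construction.

    GLYPH_T = ("###",
               " # ",
               " # ")                      # one stored pattern for every "t"

    def render(char):
        # Each request for "t" returns the same bitmap; there is no
        # individual glyph that could differ from any other.
        return GLYPH_T if char == "t" else None

    print(render("t") == render("t"))      # True -- identical, even in principle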

Rhodes then discusses:
"Quantum leaps," as though there was no time or space between quantum events

It is a difficult intellectual challenge to imagine a physical object that can change from one form into another form, or move from one place to another place, without going through any transition between the two states. Zeno's paradoxes offer a rigorously logical examination of this concept... In brief, Zeno appears to have "proved" that motion is not possible, because continuity (smooth transitions) between one state and the next implies an infinite number of transitions to accomplish any change whatsoever. Zeno's paradoxes imply that space and time are discontinuous -- discrete points and discrete instants with nothing in between, not even nothing. Yet the mind reels to imagine space and time as disconnected, always seeking to understand what lies between two points or two instants which are said to be separate.

In quantum mechanics, however, there is no transition at all. Electrons are in a low energy state on one observation, and in a higher energy state on the next; they spin one way at first, and in the opposite direction next. The processes proceed step-wise; but more than step-wise, there is no time or space in which the process exists in any intermediate state.

Imagine that you are watching a movie... Now, the projectionist begins to slow the projection rate... motion which seemed so smooth and continuous when projected at 30 frames per second or so is really only a series of still shots. There is no motion in any of the pictures, yet by rapidly flashing a series of pictures depicting intermediate positions of an actor or object, the effective illusion is one of motion.

...Computers create images in the same manner... If the computer is quick enough, you do not notice any transition. Nevertheless, the computer's "time" is completely discrete, discontinuous, and digital...
Similarly, the computer's "space" is discrete, discontinuous, and digital...

The theory and architecture of computers lend themselves to a step-by-step approach to any and all problems. It appears that there is no presently conceived computer architecture that would allow anything but such a discrete, digitized time and space, controlled by the computer's internal clock ticking one operation at a time. Accordingly, it seems that this lack of continuity, so bizarre and puzzling as a feature of our natural world, is an inherent characteristic of a computer simulation.
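
A minimal Python sketch of this frame-by-frame picture (the frame rate and trajectory are illustrative assumptions): position is computed fresh at each tick, and the model simply contains no state between ticks.

    def frame(tick, dt=1.0 / 30):          # ~30 frames per second, as above
        x = 5.0 * tick * dt                # position recomputed at each tick
        return f"tick {tick}: x = {x:.3f}"

    for tick in range(4):                  # four successive stills; nothing
        print(frame(tick))                 # exists "between" them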

Quantised Space:
The breakdown at zero, yielding infinities

The difficulty arises when the highly precise quantum calculations are carried out all the way down to an actual zero distance (which is the size of a dimensionless point -- zero height, zero width, zero depth). At that point [sic], the quantum equations return a result of infinity, which is as meaningless to the physicist as it is to the philosopher...

... it was discovered that the infinities disappeared if one stopped at some arbitrarily small distance ... instead of proceeding all the way to an actual zero. One problem remained, however, and that was that there was no principled way to determine where one should stop... The only requirement was to stop somewhere short of the actual zero point. It seemed much too arbitrary. Nevertheless, this mathematical quirk eventually gave physicists a method for doing their calculations according to a process called "renormalization," which allowed them to keep their assumption that an actual zero point exists, while balancing one positive infinity with another negative infinity in such a way that all of the infinities cancel each other out, leaving a definite, useful number.

... this is ... a revisitation of Zeno's Achilles paradox of dividing space down to infinity. The philosophers couldn't do it, and neither can the physicists.

... The need to resort to such a mathematical sleight-of-hand to obtain meaningful results in quantum calculations is frequently cited as the most convincing piece of evidence that quantum theory -- for all its precision and ubiquitous application -- is somehow lacking, somehow missing something. It may be that one missing element is quantized space -- a shortest distance below which there is no space, and below which one need not calculate.

In my mathematical analysis this is the Planck length.
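
A minimal numerical sketch of the cutoff idea (the 1/r^2 integrand is an illustrative stand-in, not an actual quantum-field integral): integrated all the way down to zero the result diverges, but any nonzero floor, such as a Planck-length cutoff, leaves a finite number.

    def integrate_inverse_square(cutoff, upper=1.0, steps=1_000_000):
        # Midpoint-rule integral of 1/r^2 from `cutoff` up to `upper`.
        h = (upper - cutoff) / steps
        return sum(h / (cutoff + (i + 0.5) * h) ** 2 for i in range(steps))

    for cutoff in (1e-1, 1e-2, 1e-3):
        print(f"cutoff {cutoff:g}: integral = {integrate_inverse_square(cutoff):.1f}")
    # The result grows like 1/cutoff and diverges at zero; stopping at a
    # shortest distance keeps every such calculation finite.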

Quantised time:
With quantized time, we may imagine that the change in such an either/or property takes place in one unit of time, and that, therefore, there is no "time" at which the spin is anywhere in the middle. Without quantized time, it is far more difficult to eliminate the intervening spin directions.

... the idea that time (as well as space) is "quantized," ... is still controversial. The concept has been seriously proposed on many occasions, but most current scientific theories do not depend on the nature of time in this sense. About all scientists can say is that if time is not continuous, then the changes are taking place too rapidly to measure, and too rapidly to make any detectable difference in any experiment that they have dreamed up. The theoretical work that has been done on the assumption that time may consist of discontinuous jumps often focuses on the most plausible scale, which is related to the three fundamental constants of nature -- the speed of light, the quantum of action, and the gravitational constant. This is sometimes called the "Planck scale," involving the "Planck time,"... On this theoretical basis, the pace of time would be around 10^-44 seconds... And that is much too quick to measure by today's methods, or by any method that today's scientists are able to conceive of, or even hope for.

My own work proves that the Planck scale provides an optimal computational regime for the simulation of a dynamical context using a finite discrete iterative computational process.

Although most of the foregoing is mere argument, it is compelling in its totality, and it is elegant in its power to resolve riddles both ancient and modern. Moreover, if we accept the quantization of space and time as a basic fact of the structure of our universe, then we may go on to consider how both of these properties happen to be intrinsic to the operations of a computer.

Non-Locality:
As though all calculations were in the CPU, regardless of the location of the pixels on the screen.

the essence of non locality is unmediated action-at-a-distance. A non-local interaction jumps from body A to body B without touching anything in between... Without benefit of mediation, a non-local interaction effortlessly flashes across the void...

Even "flashes across the void" is a bit misleading, because "flashing" implies movement, however quick, and "across" implies distance traveled, however empty. In fact, non-locality simply does away with speed and distance, so that the cause and effect simply happen... There is no apparent transfer of energy at any speed, only an action here and a consequence there.

Non-locality for certain quantum events was theorized in the 1930s as a result of the math... In the 1960s, the theory was given a rigorous mathematical treatment by John S. Bell, who showed that if quantum effects were "local" they would result in one statistical distribution, and if "non-local" in another distribution. In the 1970s and '80s, the phenomenon was demonstrated, based on Bell's theorem, by the actual statistical distribution of experiments... the phenomenon appears recently to have been demonstrated directly at the University of Innsbruck. ("Entangled Trio to Put Nonlocality to the Test," Science 283, 1429 (Mar. 5, 1999).)

it was fair to ask whether apparent separations in space and time ... are fundamentally "real"; or whether, instead, they are somehow an illusion, masking a deeper reality in which all things are one, ... always connected one to another and to all. This sounds suspiciously like mysticism, and the similarity of scientific and mystical concepts led to some attempts to import Eastern philosophy into Western science.

The non-locality which appears to be a basic feature of our world also finds an analogy in the same metaphor of a computer simulation. In terms of cosmology, the scientific question is, "How can two particles separated by half a universe be understood as connected such that they interact as though they were right on top of each other?"...

In fact, the measured distance between any two pixels (dots) on the monitor's display turns out to be entirely irrelevant, since both are merely the products of calculations carried out in the bowels of the computer as directed by the programming. The pixels may be as widely separated as you like, but the programming generating them is forever embedded in the computer's memory in such a way that -- again speaking quite literally -- the very concept of separation in space and time of the pixels has no meaning whatsoever for the stored information.
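
A minimal Python sketch of the pixel analogy (the screen coordinates and stored value are invented for illustration): two widely separated pixels are both drawn from a single record in memory, so their on-screen separation is irrelevant to their perfect correlation.

    import random

    SHARED_STATE = random.choice(["up", "down"])   # one stored fact, no location

    def draw_pixel(x, y):
        # The coordinates play no role in the value displayed; "distance"
        # between pixels has no meaning for the stored information.
        return SHARED_STATE

    print(draw_pixel(0, 0) == draw_pixel(1919, 1079))   # always True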

The 'uncanny' utility of mathematics in the physical sciences:
As though physical manifestations themselves were being produced by a mathematical formula.

Even though the mathematical formulas were initially developed to describe the behavior of [the] universe, these formulas turn out to govern the behavior of the universe with an exactitude that defies our concept of mathematics. As Nick Herbert puts it, "Whatever the math does on paper, the quantumstuff does in the outside world." (N. Herbert, Quantum Reality, 41) ... The backwards logic implied by quantum mechanics, where the mathematical formalism seems to be more "real" than the things and objects of nature, is unavoidable.

Mr. Herbert states that, "Quantum theory is a method of representing quantumstuff mathematically: a model of the world executed in symbols." (N. Herbert, Quantum Reality, 41) ... the equivalence between quantum symbolism and universal reality must be more than an oddity: it must be the very nature of reality.

... the task for the Western rationalist is to find a mechanical model from our experience corresponding to a "world executed in symbols."

An example which literally fits this description is the computer simulation, which is a graphic representation created by executing programming code. The programming code itself consists of nothing but symbols, such as 0 and 1. Numbers, text, graphics and anything else you please are coded by unique series of numbers. These symbolic codes have no meaning in themselves, but arbitrarily are assigned values which have significance according to the operations of the computer [internally meaningful]. The symbols are manipulated according to the various step-by-step sequences (algorithms) by which the programming instructs the computer how to create the graphic representation. The picture presented on-screen to the user is a world executed in colored dots; the computer's programming is a world (the same world) executed in symbols. Anyone who has experienced a computer crash knows that the programming (good or bad) governs the picture, and not vice versa. All of this forms a remarkably tight analogy to the relationship between the quantum math on paper, and the behavior of the "quantumstuff" in the outside world.

Summary of The Finite Nature Hypothesis of Edward Fredkin

Below are some quotes taken from a discussion by Ross Rhodes on an earlier paper by Edward Fredkin. The respective documents are "The Finite Nature Hypothesis of Edward Fredkin" (www.bottomlayer.com/bottom/finite-all.html) and "Finite Nature" (www.ai.mit.edu/projects/im/ftp/poc/fredkin/Finite-Nature).

Overview:
Edward Fredkin's Finite Nature is the assumption that all things are discrete, step-wise and granular, rather than continuous, flowing or smooth. If this is the case, then all natural phenomena can be represented by scalar values -- numbers. Discrete properties can be represented by these scalar values as a finite set of information, and therefore all of nature can be represented by numbers. Because time itself is assumed to be discrete and step-wise, all transitions must take place "off-stage." The representation of the present moment is changed to a different representation of the future moment and then presented as another finite set of information. The model of a finite set of information represented by scalar values, transformed to a new and equally finite set of information, is to be found in the programming processes of an ordinary computer.

If we assume that all things in nature are discrete, it follows that all things can be described by a finite set of information. This set of information must be accompanied by a process causing it to evolve from one "frame" of time to the next. The most compelling model for meaningful transformation of one finite set of information to another is the computer, and in particular the computer architecture known as "Cellular Automata."
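
A minimal Python sketch of such a process, using an elementary one-dimensional cellular automaton (the rule number 110 and the grid width are illustrative choices, not Fredkin's actual model): a finite set of bits, plus one fixed rule, evolves frame by frame with no intermediate states.

    RULE, WIDTH = 110, 32                  # illustrative choices

    def step(cells):
        # Compute the entire next frame from the present one; each new cell
        # is looked up from the 8-bit rule by its 3-cell neighborhood.
        return [(RULE >> (cells[(i - 1) % WIDTH] * 4 + cells[i] * 2
                          + cells[(i + 1) % WIDTH])) & 1
                for i in range(WIDTH)]

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1                  # a single bit of initial information
    for _ in range(8):                     # eight discrete time-steps
        print("".join(".#"[c] for c in cells))
        cells = step(cells)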

Edward Fredkin's hypothesis of Finite Nature begins with the assumption that all things eventually will be shown to be discrete and, conversely, nothing exists in smooth or continuous form.

Information:
Finite Nature supposes that all properties can be expressed by numbers because all properties are discrete and step-wise. In this sense, Fredkin is referring to the data itself as the relevant information, rather than the meaning associated with the data.

Computation:
A finite amount of information cannot evolve without a processing mechanism. Information + Process implies a mechanism of computation.

In light of our assumption that time itself is discrete and step-wise, we are not permitted to imagine the information sliding or morphing from one state to the next; no, the information must be in one state in the present moment, and in another, different state in the next succeeding future moment. Moreover, the discrete nature of each separate property must effect a complete change from moment to moment, with no possibility of "transition." From one moment to the next -- from one time-step to the next -- the information must be transformed completely to a new and different, but equally finite, set of information describing the volume of space-time as it then exists.

The fact that we observe motion, and other aspects of a system which evolves over time, was the unsolvable problem for Zeno.[8] Fredkin, however, resolves the paradox neatly by reference to a system of apparent motion unknown to the ancient Greeks -- the common operations of a computer program. By reducing the volume of space-time to a symbolic representation of the information contained within it, Fredkin produces the full equivalent of the volume of space-time in the form of the information itself. ... Given Finite Nature, there is literally no difference between the information content of the volume of space-time and the "thing itself" because the information is the thing and vice versa.

As Fredkin explains, "Single clock means that when the computer is in a state, a single clock pulse initiates the transition to the next state. After the clock pulse happens, various changes propagate through the computer until things settle down again. Just after each clock pulse the machine lives for a short time in the messy real world of electrical transients and noise. At the points in time just before each clock pulse the computer corresponds exactly to the workings of a theoretical automaton or Finite State Machine."

The computer's process of transition, unknown to Zeno, is not reflected by the state of the computer's memory itself, but exists in the abstract application of the rule or rules. We do not "see" the transition, nor even the process of transition. As it happens, this is exactly the case with quantum mechanical transitions, or "quantum leaps," which were anticipated by Zeno but dismissed as absurd and impossible by Aristotle.
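
A minimal Python sketch of this clocked picture (the four-state cycle is an invented stand-in for "the rules"): the next state is a pure function of the present state, the machine is well-defined only at the ticks, and the transition itself is never a state we can observe.

    def rule(state):
        return (state + 1) % 4             # applied "off-stage", between ticks

    state = 0
    for tick in range(6):
        print(f"just before clock pulse {tick}: state = {state}")
        state = rule(state)                # no intermediate state exists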

Cellular Automata
There is no difference in principle between the individual cellular automaton and any other computer. Both consist of a block of memory which is acted upon by a set of programming instructions. The difference in practical terms is that the cellular automaton operates according to a severely restricted set of instructions (programming), and so requires a comparatively modest amount of memory. With these limitations, we can create a vast number of cellular automata using our finite memory and programming resources. The key to the utility of the cellular automata ("CA") computer architecture is that when we assemble this vast number of simple, independent computing units, they can interact among themselves in breathtakingly complicated ways.

It is the interaction among the cellular automata which causes their states to evolve, and to evolve in a way that bears a relationship between past, present and future such that a pattern emerges.

Finite Nature ... posits the most fundamental unit as a processing unit, a bit of information, a "virtual computer" or "subroutine", not as "physical cells bouncing around in space".

Each cell acts independently in the sense that it follows its own rule or rules when deciding what it shall become in the next instant of time ... but the rule of each cell is exactly the same as that of every other cell, and each takes into account the state of each of its neighbors and determines its next future state on the basis of the input received from its neighbors' present state.

The simplicity of CA architecture gives rise to a vast complexity of interaction which, in turn, produces pleasing patterns of order as the scale increases. An examination of CA architecture shows that it has the potential to model all varieties of interactions in the natural world
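
Conway's Game of Life, a standard two-dimensional CA (used here as a familiar illustration, not Fredkin's own architecture), shows this in a few lines: every cell runs the identical neighbor-counting rule, yet ordered patterns such as gliders emerge and travel across the grid.

    from collections import Counter

    def life_step(live):
        # One synchronous step; `live` is the set of (x, y) cells that are on.
        counts = Counter((x + dx, y + dy)
                         for (x, y) in live
                         for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                         if (dx, dy) != (0, 0))
        # The same rule for every cell: born with exactly 3 live neighbors,
        # survives with 2 or 3.
        return {cell for cell, n in counts.items()
                if n == 3 or (n == 2 and cell in live)}

    glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # a classic moving pattern
    for _ in range(4):
        glider = life_step(glider)
    print(sorted(glider))   # the same shape, shifted one cell diagonally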

Fredkin concludes: "Given Finite Nature, what we have at the bottom is a Cellular Automaton of some kind." And, because "Automata, Cellular Automata and Finite State Machines are all forms of computers," this is to say that at the bottom of the physics of the natural world, we have a computer of some kind.

all common computers are universal in the sense that their functioning is dependent on the software and independent of the hardware.

If the ultimate computer is universal, as it must be, then we should be able to simulate or, more properly, emulate its operations by the correct programming of our own feeble computers (in the way that a good programmer should be able to emulate a Cray supercomputer using a desktop PC). All of physics as we know it should be expressible as a computer program.
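
As a minimal sketch of such emulation (the two-register machine and its three-instruction set are invented for illustration), one computer can run another entirely in software; the behavior depends only on the program, not on the host hardware.

    def emulate(program):
        # Run a toy machine with two registers and a tiny instruction set.
        a, b, pc = 0, 0, 0
        while pc < len(program):
            op, arg = program[pc]
            if op == "add_a":              # add a constant to register A
                a += arg
            elif op == "copy_ab":          # copy register A into register B
                b = a
            elif op == "halt":
                break
            pc += 1
        return a, b

    # The "software" fully determines the result; the host machine is invisible.
    print(emulate([("add_a", 2), ("add_a", 3), ("copy_ab", 0), ("halt", 0)]))  # (5, 5)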

Quote from Fredkin:
What cannot be programmed cannot be physics. This is a very important idea. If a process cannot be programmed on a particular universal computer, despite having enough memory and time, then it cannot be programmed on any computer. If we can't program it on an ordinary computer, Finite Nature implies it can't be a part of physics because physics runs on a kind of computer.

The following is a list of Fredkin's conclusions as summarised by Ross Rhodes:
A. The fundamental process that we know as the physics of the natural world is an informational process.
1. The most fundamental stuff of physics is not the cells and bits. It is the result of the fundamental process run in the cells with the bits.

2. Information must have a means of its representation. If we had the right kind of magic microscope, we should always be able to see the digits that represent whatever information is present. Information is never "just there".

3. Physics is a consequence of the fundamental bits being engaged in the fundamental informational process.

B. Transformation of information is accomplished by a mechanism of computation which is not essentially different from the familiar computers of our technology.

4. The future is computed as a function of the present and (most likely) the past. One way or another, it is very likely a second order system.

5. The fundamental process that underlies physics must be computation universal. Since the test of a computation universal process is that one can program up a universal machine within it, the existence of computers proves that the fundamental process is universal.

6. What cannot be programmed cannot be physics. This is a very important idea. If a process cannot be programmed on a particular universal computer, despite having enough memory and time, then it cannot be programmed on any computer. If we can't program it on an ordinary computer, Finite Nature implies it can't be a part of physics because physics runs on a kind of computer.

7. The Engine is the computer that runs the fundamental informational process that runs physics.

8. The Engine and the fundamental informational process are computation universal.

9. Any universal Engine will do (given enough memory).

10. Physics is totally independent of the characteristics of the engine, so long as it is universal.

C. The physics of the natural world must be analyzed according to what is necessary and possible for computers to accomplish.

11. Momentum. Things don't just happen; everything is a simple consequence of the fundamental process. Viewed from the informational perspective, the idea that a particle just has a momentum makes no sense. If the momentum is reasonably precise, meaning that it is one of a large number of possible momenta, then it contains a substantial amount of information, and that information must have a means of its representation.

12. Inertia. Matter has inertia because [there must be] some kind of process (like from a field) to change the momentum information (the information that governs the motion of the particle).

13. Acceleration. A particle accelerates when the environmental information, representing a field or a photon, interacts to change the information associated with the motion of the particle.

14. Yes, Virginia, there is an aether (a reference metric). Energy, time, spin and other properties of the world cannot be programmed (made to work in an automaton) without the existence of a metric.

15. More about the reference metric. An informational process cannot represent motion without a reference metric; finite nature demands the existence of a metric. This is a difficult idea. The implication is that the computational substrate of quantum mechanics must have access to some kind of metric in order to create inertial motion. Whether or not higher level processes in physics can access that metric is another question. It is hard to see why such access need be forbidden, since every particle must access the metric in order to move. Of course it is also clear that experiments done so far fail to detect any such metric. The argument for asserting the metric is that inertial motion requires an informational process that can take information that represents the velocity and use it to change the position of the particle. There must be information available that allows the process to know the velocity relative to something (the metric) that persists throughout the space that the particle will travel through. Without access to a constant, fixed space-time metric or its informational equivalent, there is no way to create an informational process that can involve particles moving independently in rectilinear motion. (A minimal sketch of this idea appears after this list.)

16. Quantum randomness. Digital Mechanics is deterministic with unknowable determinism. In general, physics is computing the future as fast as it can.

18. Prediction on the macroscopic ("classical") scale. Some special cases (like the motion of the planets) can be computed faster than physics, allowing for predictions.

19. Prediction on the microscopic ("quantum") scale. Most microscopic things, like quantum mechanics, cannot be computed on an ordinary computer faster than real time. From our perspective, the information bath that all events are immersed in will always be beyond our computing (in general). Thus, we call events dependent on that information "random".

20. The availability of shortcuts to prediction. The speed up theorem limits the detail with which we can know the future.

D. There is not, and there need not be, access to the hardware of the Engine from within the program.

21. Everything mentioned in items 1 through 20 above [such as] the computer or engine that runs the fundamental informational process of physics; and the fundamental cells with their memory and bits which are intrinsic to the engine, is not a part of physics and is not to be found in this universe. Rather, it is stuff that exists somewhere, where, perhaps, the rules may be different. If we imagine using a computer to simulate air flow in a wind tunnel, we can think of that simulated airflow as a little universe following simple laws of physics. The computer is not made up of the air flowing in the simulated wind tunnel; the computer is not in the simulated wind tunnel, it is somewhere else.
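
Below is a minimal Python sketch of point 15 above (the grid size and velocity are illustrative assumptions): a particle's state is just stored information -- a position and a velocity defined relative to a fixed grid, the "reference metric" -- and the process uses the velocity information to update the position at every tick, yielding inertial, rectilinear motion.

    GRID = 64                              # the fixed metric the process consults

    def tick(position, velocity):
        # Inertial motion: read the stored velocity (defined relative to the
        # metric) and advance the position by it, wrapping at the grid edge.
        return (position + velocity) % GRID, velocity

    pos, vel = 0, 3
    for _ in range(5):
        pos, vel = tick(pos, vel)
    print(pos, vel)                        # 15, 3 -- uniform motion, velocity unchanged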

Evolution of complex systems:
Fredkin asks why one cannot "clone" a Boeing 747 from scrapings taken from one of the passenger seats.

We see that an evolving system must begin with information -- whether in DNA strings or on a floppy disk or otherwise -- and that information must be matched to a process suitable for transforming the information into the final structure. For seeds, the process is self-replication, requiring an environment containing the elements found within the cell itself. For blueprints of mechanical systems, stored on floppy disks, the far clumsier process is a computer capable of reading and understanding the stored information, and acting upon it with all manner of ancillary manufacturing systems.

Accordingly, we begin with information in a digital state. (The genetic material of a tree or other living cell satisfies this definition through its molecular arrangement, which is not different in principle from any other data array.) We must then match the information to a process which will act upon the information and transform it into its eventual, future state.

Information matched to a transformative process. This is the recipe for development. It is also a fair description of the workings of a computer, in particular a cellular automaton. Given Finite Nature, it is ultimately a description of nature as physics has come to understand it. To Fredkin, this convergence is more than a curiosity: it is an insight into the fundamental workings of our world. Finite Nature means that our world operates as though it were the product of a computing system, and Fredkin sees that this is because our universe is an artifact produced by a computer of some sort.
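
A minimal Python sketch of "information matched to a transformative process" (the rewriting rule is an invented toy in the spirit of an L-system, not Fredkin's example): a short digital "seed" plus a fixed growth process unfolds into a structure that neither contains on its own.

    SEED = "F"
    RULE = {"F": "F[+F]F"}                 # the stored developmental information

    def grow(state):
        # The process: rewrite each symbol according to the rule; structure
        # emerges from repeatedly transforming the information.
        return "".join(RULE.get(ch, ch) for ch in state)

    state = SEED
    for _ in range(3):
        state = grow(state)
    print(state)                           # a branching "plant" written in symbols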

www.Anandavala.info