By John Ringland (www.anandavala.info/) last updated on 2006/08/02

Related Documents:

The Computational Paradigm
(#1367) Process Metaphysics and Computational Paradigm
(#1406) Computational Metaphysics
(#1415) SMN, Free Will and Unification of Paradigms
(#1418) SMN, Computational Metaphysics, Free Will and Duality
(#1427) Labels, Essence, Awareness, Computation, SMN
(#1428) Free Will, Attitude, Awareness, Self Control, Causality, Karma, Cosmic Will, Computation and Consciousness
(#1430) Metaphysics of Virtual Reality
(#1437) The Chinese Room, Experience, Knowledge and Communication
Computational Processes (proof)
(#1470) Religion/Spirituality, Energy/Information and the Unification of Material and Spiritual Science
(#1663) System Theoretic Metaphysics and the Unification of the Transcendent and Empirical Sciences

Also see other excerpts from my discussions with the Society for Scientific Exploration.

This is inspired by posting #1423 on the SSE discussion forum where David Axelrod proposed an equation describing the action of intentional influence.

This discussion explores the mathematics of influences applied to probability distributions. Starting from the proposition that (influence = force*focus) I explore the implementation issues that underlie such a scenario. I first show that the direct use of probability distributions (direct approach) leads to problematic inter-dependencies due to intrusive constraints. I then show that a more computationally oriented approach (indirect approach) leads to an efficient algorithm that represents the situation and which sheds light on the nature of intentional influence as an individual and as a group.

In the computational context described below an intentional agent can apply arbitrary force and arbitrary focus, and the resulting probability distribution is always valid. Independent agents may act freely without any inter-dependencies due to the underlying representational format. The degree of focus and the degree of coordination (common focus) determine the degree of effect, thus a group of focused and coordinated intentional agents produces the greatest effect.

Then I develop a mathematical model of will power, which describes the action of intentional influence, or the use of will power to influence the state of a system. The model describes detailed relations between the quantities focus, force, will, will power, influence and effect. There are three principal behaviours of the model that arise from its mathematical details, which may accentuate or inhibit one's ability to influence a system in different contexts.

The model gives rise to quantifiable and testable predictions regarding the degree of effect on systems in different scenarios. These could be experimentally tested using REG machines or other such experiments. If the experimental results confirm the theoretical predictions then this would lend credence to the model as a description of the underlying information theoretic processes that implement the phenomenon of intentional influence. In such a case the model would also give a quantifiable measure of will power, so one could use REG machines in conjunction with the model to measure an agent's will power. If the experimental results do not confirm the theoretical predictions the general approach exhibited here could be refined to produce variations that more closely match the experimental data. In this way I propose a possible method for the scientific analysis of will power and intentional influence.

David Axelrod wrote:

> I developed a preliminary model based on the notion that practical
> free will requires, at a minimum, the capacity to change the
> probability of events. I decomposed this into:
>
> Probability = Nomicity*Rationality
>
> where Nomicity (from nomos or management) is something like the
> conditional probability of an outcome given an entities Rationality,
> and can be considered Will Power
>
> and Rationality is the focus, or distribution, over outcomes, and
> can be considered Will.

We are talking about a change in probability so I will use the equation:

Δp = n * r

where 'r' is the **focus** of intention (distribution of will) and 'n' is the **force** of intention (force of will).

But probabilities don't come in isolation. Given some particular potential event there is a range of possible outcomes each with an associated probability. In quantum physics this is conceptualised as a wavefunction 'Ψ', which is a distribution of probability over a range of possible states 's'.

A simple example is a REG machine (see PEAR), which is a binary quantum device that is essentially a perfect coin toss. In the absence of any influence it produces heads or tails with a 50/50 probability. But experiments show that a focused will can influence the probabilities.

In this example the probability distribution is P = [ p(heads) , p(tails) ] = [ p_{0} , p_{1} ] and

Δp_{i} = ( n_{i} * r_{i} ) for each possible state.

It is conceptually elegant to consider this situation only in terms of probability distributions as normally conceived of within mathematics. But it is shown below that this is not the most computationally elegant method and the computational method has close similarities with the functioning of everyday reality.

If the underlying representational approach directly uses probability distributions then certain constraints are imposed upon the functioning of intentional influence.

The distribution is normalised such that the sum of the probabilities over all possible states is equal to one. This implies that the probability of 'some' outcome is one but the exact state of that outcome is only probabilistically determined. I.e. there will definitely be some outcome but it is uncertain as to exactly which.

Σ_{i} p_{i} = 1 : normalisation

This normalisation condition imposes a constraint on the action of intention upon the probabilities. I.e. if one probability increases the others must decrease to maintain normalisation. Hence all perturbations must cancel when summed over the entire distribution:

Σ_{i} Δp_{i} = 0 : conservation of normalisation

Thus Σ_{i} ( n_{i} * r_{i} ) = 0

So by influencing the probability of one outcome we also influence the probabilities of other outcomes. E.g. willing more heads than tails is equivalent to willing less tails than heads. Thus the functions 'n' and 'r' are applied over the entire distribution and must conform to this normalisation conservation constraint.

There is another constraint due to the fact that probability must be a value between zero and one, i.e. 0 ≤ p ≤ 1

When our intentional influence Δp alters a probability the resulting value must still be a valid probability. Hence:

0 ≤ p_{i} + Δp_{i} ≤ 1 thus -p_{i} ≤ Δp_{i} ≤ 1 – p_{i}

This applies to each p_{i} individually.

This constraint implies that we are only free to influence a system within the constraints of its range of possible states – we cannot will an impossible outcome such as heads having a probability of 2 and tails a probability of -1.

Thus -p_{i} ≤ ( n_{i} * r_{i} ) ≤ 1 – p_{i} and the functions 'n' and 'r' must be functions of the current probability.

Thus, if we directly use probability distributions the two main theoretical constraints on the action of free will, in terms of force and focus, are:

-p_{i} ≤ ( n_{i} * r_{i} ) ≤ 1 – p_{i} and Σ_{i} ( n_{i} * r_{i} ) = 0

The functions 'n' and 'r' are dependent upon the current state of the probability distribution. This means that in the act of intentional influence there are complex flows of information (computation) required and the process can be quite complicated. The situation gets worse when there are multiple intentional agents.
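As a concrete illustration, the two direct-approach constraints can be checked mechanically for any proposed perturbation Δp. This is a minimal Python sketch; the function name and the numerical tolerance are my own choices, not part of the model.

```python
def valid_perturbation(p, dp, tol=1e-9):
    # Direct-approach constraints on an intentional perturbation dp:
    #   sum_i dp_i = 0            : conservation of normalisation
    #   -p_i <= dp_i <= 1 - p_i   : each result stays a valid probability
    if abs(sum(dp)) > tol:
        return False
    return all(-pi - tol <= dpi <= 1 - pi + tol for pi, dpi in zip(p, dp))
```

For example, willing a REG's heads probability up by 0.2 is only valid if tails is simultaneously willed down by the same amount and every resulting value stays within [0, 1].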

The above considers the case of a single intentional agent influencing a single probability distribution for a single event. What if multiple intentional agents seek to influence the same event? How do their influences combine and can they act independently?

If the actions are considered jointly then the joint probability is (the probability of A AND the probability of B given A) OR (the probability of B AND the probability of A given B):

p(AB) = ½ * ( p(A) * p(B | A) + p(B) * p(A | B) ) : normalised to one

where p(X | Y) is the probability of X given Y.

If the two are truly independent then p(X | Y) = p(X), hence

p_{i}(AB) = p_{i}(A) * p_{i}(B)

= n_{i}(A) * r_{i}(A) * n_{i}(B) * r_{i}(B)

= n_{i}(A) * n_{i}(B) * r_{i}(A) * r_{i}(B)

= n_{i}(AB) * r_{i}(AB)

which describes joint force and joint focus for a collective system.

But in the context of the above constraints the systems are not truly independent so:

p(X | Y) ≠ p(X)

and the above joint equation is just an approximation.

Thus the actions of each intentional agent depend upon the actions of all other intentional agents and there are complex inter-dependencies requiring complicated information flows (computation) to implement the act of intentional influence.

This scenario suggests that the influence over a system operates over the entire probability distribution and thereby affects the probabilities of all outcomes. This places constraints on the form of the function of 'rationality' (focus of will). Furthermore there are absolute constraints on the magnitude of the influence, which places constraints on the form of the function of 'nomicity' (force of will). These constraints apply to competing influences as well, making those influences inter-dependent.

Hence when we will for a particular outcome we are also willing against other outcomes according to some distribution over those outcomes (rationality function). Furthermore, when we will with a particular force (nomicity function) for a particular outcome the magnitude of that force depends upon the probability value of the outcome. Plus due to the constraints operating on competing influences, our own rationality and nomicity are not truly independent, but are in fact dependent upon the rationality and nomicity of the other competing intentional agents, as well as the probability distribution of the event that we are trying to influence.

Thus the use of direct probability distributions leads to complicated inter-dependencies. But there is a simpler way.

Instead of directly representing the situation with probability distributions, suppose that there is an underlying representational form that is more computationally elegant.

Let X = { [ x_{0} , x_{1} , ... ] , T } where x_{i} ≥ 0 and T = Σ_{i} x_{i}

This is the base level representation, where the x_{i}'s can be any positive value (in practice they are finite discrete pseudo 'real' numbers) and T is their sum or Total.

When a probability is required it is calculated as:

p_{i} = x_{i} / T and Σ_{i} p_{i} = Σ_{i} ( x_{i} / T ) = Σ_{i} x_{i} / Σ_{i} x_{i} = 1

So the resulting probability distribution is always normalised to one.

When an x_{i} value is altered so is the total:

x_{i} = x_{i} * Δx_{i} : the influence is applied to the state value.

T = T + x_{i} * Δx_{i} - x_{i} : the change in the x_{i} value is added to the total.

Thus the normalisation is conserved.
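The representation and its update rules can be sketched directly in Python; the class and method names here are illustrative, not from the source.

```python
class StateDist:
    """Indirect representation { [ x_0 , x_1 , ... ] , T }."""

    def __init__(self, values):
        self.x = list(values)      # state values, each x_i >= 0
        self.T = sum(self.x)       # running total T = sum_i x_i

    def probabilities(self):
        # p_i = x_i / T, normalised to one by construction
        return [xi / self.T for xi in self.x]

    def apply_influence(self, i, dx):
        # multiplicative influence: x_i <- x_i * dx,
        # and the change in x_i is added to the running total
        old = self.x[i]
        self.x[i] = old * dx
        self.T += self.x[i] - old
```

Applying Δx_{0} = 5 to { [ 1 , 1 , 1 , 1 ] , 4 } yields probabilities [ 5/8 , 1/8 , 1/8 , 1/8 ], matching the focused-will example given later.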

Note that Δx_{i} is a multiplicative factor; if it were an additive factor then the same intentional power would have different effects depending on the underlying representational format. For example:

Let Δx_{0} = 1 and the underlying representation be { [ 1 , 1 ] , 2 }, which is equivalent to the probability distribution [ 1/2 , 1/2 ]. Then apply the influence x_{0} = x_{0} + Δx_{0} = 1 + 1, giving:

{ [ 2 , 1 ] , 3 } ≡ [ 2/3 , 1/3 ]

But if the underlying representation was instead { [ 100 , 100 ] , 200 } ≡ [ 1/2 , 1/2 ] which gives the same probability distribution as above, the result is:

{ [ 101 , 100 ] , 201 } ≡ [ 101/201 , 100/201 ] ≈ [ 1/2 , 1/2 ]

Thus a greater influence is required in the second case to create an equivalent effect. But the underlying representational format should not influence the outer empirical nature of the scenario so the influence must be a multiplicative factor.
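The representation-dependence of an additive influence (and the representation-independence of a multiplicative one) can be checked numerically; this sketch assumes nothing beyond the arithmetic above, and `probs` is a helper name of my own.

```python
def probs(x):
    # normalise state values into a probability distribution
    T = sum(x)
    return [xi / T for xi in x]

small = [1.0, 1.0]      # { [1, 1], 2 }       ≡ [1/2, 1/2]
big = [100.0, 100.0]    # { [100, 100], 200 } ≡ [1/2, 1/2]

# additive influence (+1) has different effects in the two representations
add_small = probs([small[0] + 1, small[1]])   # [2/3, 1/3]
add_big = probs([big[0] + 1, big[1]])         # [101/201, 100/201]

# multiplicative influence (*2) has the same effect in both
mul_small = probs([small[0] * 2, small[1]])   # [2/3, 1/3]
mul_big = probs([big[0] * 2, big[1]])         # [2/3, 1/3]
```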

Since the influence Δx_{i} is a multiplicative factor, Δx_{i} = 1 implies no influence, whereas in terms of force and focus the quantity n_{i} * r_{i} = 0 implies no influence.

So consider the relation: Δx_{i} = 1 + (n_{i} * r_{i})

But the x_{i} values can be any **positive** value, thus Δx_{i} ≥ 0 and (n_{i} * r_{i}) ≥ -1

Thus the quantity (n_{i} * r_{i}) takes inhibiting values in the range –1 to 0 and accentuating values in the range 0 to ∞ (or the maximum representable value). This function is lopsided in that there is a greater density of values in the inhibitive range than in the accentuating range. Perhaps some other function of 'n' and 'r' can provide a more symmetric function.

Consider the function:

Δx = e^{(n * r)}

Then (n * r) can vary from -∞ to ∞ (arbitrary power and any number of influences) and when (n * r) = 0 then Δx = e^{0} = 1 (no influence). Thus the Δx_{i} ≥ 0 constraint is encapsulated by the exponential function. This need not be the 'natural' exponential function, i.e. 'e' need not be the base of the natural logarithm. The exact value could be determined by experiment.
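A one-line sketch of the exponential influence function, assuming base e (though, as noted, the base could be tuned experimentally):

```python
import math

def influence(n, r):
    # dx = e^(n*r): n*r = 0 gives dx = 1 (no influence),
    # and dx > 0 for any n*r, so state values remain positive
    return math.exp(n * r)
```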

Using this indirect representational format the situation allows the intentional agent to focus only on influencing a particular state or subset of states without breaching the normalisation constraints, which are encapsulated within a computational layer.

Furthermore the degree of focus determines the magnitude of the resulting effect. The following two examples illustrate this:

If an agent focused all of their will power into a single outcome:

{ [ 1 , 1 , 1 , 1 ] , 4 } ≡ [ 1/4 , 1/4 , 1/4 , 1/4 ] : initial condition

{ [ 5 , 1 , 1 , 1 ] , 8 } ≡ [ 5/8 , 1/8 , 1/8 , 1/8 ] : Δx_{0} = 5

but if they applied this level of will power in a totally unfocused manner:

{ [ 5 , 5 , 5 , 5 ] , 20 } ≡ [ 1/4 , 1/4 , 1/4 , 1/4 ] : Δx_{i} = 5

then there is zero effect regardless of how much effort is applied! This is related to the "normalisation reduction effect" that is discussed later.

Note also that this is an iterative regime so successive influences are applied one after the other – thus a sustained focus has greatest effect whereas a wandering focus will weaken the effect. If the focus wanders over the entire range of system states then, over time, this is equivalent to an unfocused will.
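The two focus examples can be reproduced numerically; this is a sketch, and `probs` is a helper introduced here.

```python
def probs(x):
    # normalise state values into probabilities
    T = sum(x)
    return [xi / T for xi in x]

x = [1.0, 1.0, 1.0, 1.0]                 # initial condition ≡ [1/4] * 4
focused = [5 * x[0], x[1], x[2], x[3]]   # all will on a single outcome
unfocused = [5 * xi for xi in x]         # same effort on every outcome
```

The focused case yields [ 5/8 , 1/8 , 1/8 , 1/8 ] while the unfocused case leaves the distribution unchanged at [ 1/4 , 1/4 , 1/4 , 1/4 ].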

This computational approach also allows multiple intentional influences to act independently without breaching the normalisation constraints, which are encapsulated within a computational layer.

Conflicting influences cancel each other out resulting in minimal effect and coordinated influences combine resulting in maximal effect. The following two examples illustrate this:

Consider two intentional agents A and B that are attempting to influence a REG machine or any binary outcome. Firstly consider the conflicting case where Δx_{0}(A) = 5 and Δx_{1}(B) = 5 (each wills for a different outcome). If the initial condition is { [ 1 , 1 ] , 2 } ≡ [ 1/2 , 1/2 ] then after the influence is applied it becomes { [ 5 , 5 ] , 10 } ≡ [ 1/2 , 1/2 ] and zero effect occurs.

Now consider coordinated influences where Δx_{0}(A) = 5 and Δx_{0}(B) = 5 (each wills for the same outcome). If the initial condition is { [ 1 , 1 ] , 2 } ≡ [ 1/2 , 1/2 ] then after the influence is applied it becomes { [ 25 , 1 ] , 26 } ≡ [ 25/26 , 1/26 ] and a significant effect occurs.

Thus competing intentions cancel out resulting in wasted effort (entropy) and coordinated intentions magnify each other resulting in accentuated effect.
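The conflicting and coordinated cases differ only in which state each agent's multiplicative influence targets; a minimal sketch:

```python
def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

start = [1.0, 1.0]                          # { [1, 1], 2 } ≡ [1/2, 1/2]

# conflicting: A applies *5 to state 0, B applies *5 to state 1
conflicting = [start[0] * 5, start[1] * 5]  # ≡ [1/2, 1/2], zero effect

# coordinated: both A and B apply *5 to state 0
coordinated = [start[0] * 5 * 5, start[1]]  # ≡ [25/26, 1/26]
```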

In this scenario the actions of separate agents are independent so:

p(X | Y) = p(X) and the joint influence

p_{i}(AB) = p_{i}(A) * p_{i}(B)

= n_{i}(A) * r_{i}(A) * n_{i}(B) * r_{i}(B)

= n_{i}(A) * n_{i}(B) * r_{i}(A) * r_{i}(B)

= n_{i}(AB) * r_{i}(AB)

Thus one can speak of joint focus and joint force.

This second method is the most computationally elegant approach. It is far more efficient and flexible as well as giving rise to outward behaviour that is similar to "real world" phenomena. If one was programming this scenario in software it is this approach that would be the most obvious, thereby encapsulating the normalisation constraints and providing greater computational efficiency and flexibility. The utility of this approach in describing the dynamics of intentional influence is perhaps one more indication of the computational foundations of reality.

An intentional agent can apply arbitrary focus and arbitrary force, and the resulting probability distribution is always normalised. Furthermore independent agents may act freely without any inter-dependencies due to the underlying representational format, such as conservation of normalisation.

Some implications are that the degree of focus and the degree of coordination determine the degree of effect, thus a large group of focused and coordinated intentional agents produces the greatest effect.

**Computational representation:**

X = { [ x_{0} , x_{1} , ... ] , T } where x_{i} ≥ 0 and T = Σ_{i} x_{i}

**Equivalent probability:**

p_{i} = x_{i} / T

**Influence in terms of force and focus:**

Δx_{i} = e^{( n_{i} * r_{i} )}

**Application of influence:**

x_{i} = x_{i} * Δx_{i} : the influence is applied to the state value.

T = T + x_{i} * Δx_{i} - x_{i} : the change in the x_{i} value is added to the total.

Consider **will** (w) where the **influence** is Δx_{i} = e^{w_{i}} .

The quantity 'will' is an output signal from an intentional agent that determines the influence upon a system. If will is a signal, with +'ve and –'ve amplitudes, consider the power of the signal; this is the total **will power**: P = Σ_{i} w_{i}^{2} , where w_{i}^{2} is the will power directed toward outcome i.

For a given system the total will power is a finite quantity that can be employed in the act of intentional influence. Thus the total will power is a conserved quantity, i.e. a system cannot create more influence than its total will power allows for.

Let w_{i} = n_{i} * r_{i} (force * focus), thus Σ_{i} ( n_{i}^{2} * r_{i}^{2} ) = P

The focus function r_{i} distributes the will power by focusing it more onto some outcomes than others, but there is only a finite conserved amount of will power to be directed, thus:

Let Σ_{i} r_{i}^{2} = 1, thus n_{i}^{2} = P and Σ_{i} ( n_{i}^{2} * r_{i}^{2} ) = P * Σ_{i} r_{i}^{2} = P

Σ_{i} r_{i}^{2} = 1 requires that | r_{i} | ≤ 1
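Putting the will power quantities together, a hedged sketch (the function name is mine; it assumes the exponential influence Δx_{i} = e^{w_{i}} with w_{i} = n * r_{i} and n = √P):

```python
import math

def apply_will(x, P, r):
    # P : total will power (a conserved quantity)
    # r : focus amplitudes, requiring sum_i r_i^2 = 1 (so n^2 = P)
    assert abs(sum(ri * ri for ri in r) - 1.0) < 1e-9
    n = math.sqrt(P)
    # w_i = n * r_i and influence dx_i = e^(w_i)
    return [xi * math.exp(n * ri) for xi, ri in zip(x, r)]
```

With P = 1 and full focus on heads, r = [ 1 , 0 ], a fair binary system [ 1 , 1 ] becomes [ e , 1 ], i.e. p(heads) = e / (e + 1) ≈ 0.73.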

In order to model situations where there are multiple intentional agents and multiple intention targets (e.g. REG machines) we need to extend the above model using an extended form of matrix algebra. System Matrix Notation (SMN) provides certain algorithmic extensions to the underlying matrix algebra. For example, when a matrix is defined it is also defined with a row function, an accumulator and a pairwise function. These respectively form the sequence {rf, af, pf}.

When the matrix is multiplied by a vector the pairwise function determines how each pair of values is combined into a single value. For common matrix multiplication the pairwise function is multiplication: pf(x,y) = x * y .

The pairwise functions produce a sequence of results, which are combined by the accumulator function. In common matrix multiplication the accumulator function is summation: af(x) = Σ_{i} x_{i} .

The row function operates on the result of the accumulator, thus it operates on the combined result of the entire row. In common matrix multiplication the row function is a null function: rf(x) = x .

The equations below use a shorthand; the full matrix specifications take the form {rf, af, pf}.

In one case the pairs are multiplied and then remain as a sequence of results.

In the other case the pairs are multiplied, then the results are summed together and then an exponential function operates on this result.

This will become clearer in the examples below.

Consider the case of a single agent willing a particular outcome on a single REG machine or any binary system.

Let x_{0} represent the likelihood that the outcome is heads and x_{1} represent the likelihood of tails.

Consider the situation of two agents influencing a single binary system:

Consider two agents A and B, agent A wills only for more heads than tails (accentuates p(H)) and agent B wills only for less heads than tails (inhibits p(H)).

Thus the resulting combined will undergoes destructive interference and if the two agents have equal will power then there is no net influence.

Consider two agents A and B, agent A wills only for more heads than tails (accentuates p(H)) and agent B wills only for more tails than heads (accentuates p(T)).

Thus both state values are influenced and if the two will powers are equal then there is no net influence once the state values are normalised into probabilities.

Above we considered the case of one agent influencing one REG and two agents influencing one REG, but what if there were multiple REG's?

Consider one agent and two REG's A and B:

And similarly for more REG machines, A, B, C, and so on.

It will be useful to have a quantifiable measure for 'effect' that captures the change of a system due to incident influence. We wish to measure the degree to which the probability distribution has changed. The difference can be positive or negative so consider the square of the difference. This is the difference power, which when summed over all states and systems gives a measure of the total change in the system.

Let D_{i} = ( x_{i} * Δx_{i} - x_{i} )^{2} be the difference power for state i

and D = Σ_{i} D_{i} is the total difference power.

Minimum difference is when no change occurs and D = 0.

Maximum difference is when the system moves from one extreme to another, such as:

[ 1 , 0 ] → [ 0 , 1 ] so D = (0-1)^{2} + (1-0)^{2} = 1 + 1 = 2
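The worked examples in this text apply the measure to the normalised distributions, so a sketch can take the before and after distributions directly:

```python
def difference_power(before, after):
    # D = sum_i (after_i - before_i)^2 over the distribution's states
    return sum((a - b) ** 2 for a, b in zip(after, before))
```

The extreme move [ 1 , 0 ] → [ 0 , 1 ] gives D = 2, and no change gives D = 0.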

This measure is the only one used in this analysis; however, there are two other measures that may be useful in different situations, so they are mentioned below for completeness.

If there is a particular target state that we wish to attain we can measure the effect on the system based upon how close the resulting state is to the target state.

Let R be the resulting state; I be the initial state and T be the target state.

(T-I)^{2} is the initial difference from the target state.

(T-R)^{2} is the resultant difference from the target state.

(T-I)^{2} – (T-R)^{2} is the movement toward the target state (positive when the result is closer to the target).

(T-I)^{2} – (T-R)^{2} = T^{2} – 2TI + I^{2} – T^{2} + 2TR - R^{2}

= 2T(R – I) + I^{2} - R^{2}

This value can be calculated for each probability and then summed into a total measure of the movement toward the target state.

The Shannon Information measure gives a measure of the disorder of a probability distribution.

If a binary probability distribution is [½, ½] then H = 1 bit (maximum for a binary system) but if the distribution is [1,0] then H = 0 bits.

Thus when the distribution is totally disordered (undefined) then H = maximum and when it is totally ordered (fully defined) then H=0.

If we consider an inverse quantity of H this gives a measure of the order of the system.

When the distribution is fully defined this order measure is at its maximum, and when the distribution is fully undefined it is at its minimum, where 0 < min < 1.

This Shannon Information based measure is not an effective measure of 'effect' because [ 1 , 0 ] and [ 0 , 1 ] have the same degree of order; if a person influenced the system to change from one to the other then a significant effect would have been produced yet there would be no change in the overall order.
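A sketch of the Shannon measure, which also illustrates why it cannot distinguish [ 1 , 0 ] from [ 0 , 1 ]:

```python
import math

def shannon_bits(p):
    # H = -sum_i p_i * log2(p_i), with the convention 0 * log2(0) = 0
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)
```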

This measure could be useful in some REG experiments, such as the Global Consciousness Project (GCP), where there is no attempt to intentionally influence any outcomes and the REG machine is simply used to measure the degree of coherence of the ambient field of consciousness. Coherent consciousness tends to create more order in the REG output and incoherent consciousness creates disordered output.

When one focuses one's will power across numerous target states the overall amplitude increases. This is due to the fact that the power distribution is normalised to one. If the power is focused on a single state then r^{2} = 1 and r = 1. But if the focus is directed at numerous target states then r^{2} < 1 and |r| > r^{2}. This last fact produces an amplitude magnification effect.

So if the focus is spread across many target states the focus r_{i} on each state tends toward zero and the amplitude magnification tends toward infinity. This magnification is further enhanced by the exponential function, thus the value of 'e' could be tuned to adjust the impact of this effect on the final result.

This effect indicates that one produces the greatest overall effect by spreading one's focus broadly. However there are other phenomena that complicate this matter, these are discussed below.

When the state value distribution is normalised into a probability distribution the sum of all state values is calculated and each state value is divided by this sum. Thus the actual values of the state values have no impact on the resulting probability; it is the relative values that matter. This has effects on how the resulting probability distribution behaves.

One aspect of this was illustrated earlier: if every state of a system is influenced by the same amount then the relative values remain unchanged and the resulting probability distribution is unaffected. This also affects influences that are not completely focused on a single state of a system; if the influence changes several state values then the relative change of the entire distribution is reduced.

Another aspect of normalisation reduction relates to the size of the distribution. For example, if there is a binary system where one state is influenced so that it doubles in value:

{ [ 1 , 1 ] , 2 } ≡ [ 1/2 , 1/2 ] → { [ 2 , 1 ] , 3 } ≡ [ 2/3 , 1/3 ] : D ≈ 0.056

but if the same influence is applied to one state of a 10 state system, then:

{ [ 1 , 1 , ... , 1 ] , 10 } ≡ [ 1/10 , ... , 1/10 ] → { [ 2 , 1 , ... , 1 ] , 11 } ≡ [ 2/11 , 1/11 , ... , 1/11 ] : D ≈ 0.0074

So the same influence upon a single state results in the same change in state value but due to normalisation reduction the resulting effect is much smaller upon the more complex system (0.13 times).
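The 0.13 factor can be reproduced by applying the difference power measure to the two cases; this numerical check assumes D is computed on the normalised probabilities.

```python
def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

def difference_power(before, after):
    return sum((a - b) ** 2 for a, b in zip(after, before))

# double one state value in a 2-state and in a 10-state system
d2 = difference_power(probs([1.0] * 2), probs([2.0] + [1.0] * 1))
d10 = difference_power(probs([1.0] * 10), probs([2.0] + [1.0] * 9))
ratio = d10 / d2                 # ~ 0.13
```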

The power/amplitude magnification effect indicates that it is best to spread one's influence broadly but the normalisation reduction effect counters that effect when the will is spread across the states of a single system. Thus it is best to spread one's influence broadly but across many separate systems; when the influence is focused upon one state for each available system then there is maximal power/amplitude magnification and no normalisation reduction and therefore maximum effect.

This effect relates to systems that are composed of sub-systems. If a super-system is composed of two binary systems then it has collective states 00, 01, 10, 11. But if the super-system consists of three binary systems then it has 8 collective states ranging from 000 to 111.

The number of collective states is c^{n} where n is the number of sub-systems and c is the complexity of the sub-systems. For simplicity I assume that all sub-systems have equal complexity but the general argument applies to the more complex case as well.

Two binary sub-systems leads to 2^{2} = 4 collective states.

Three binary sub-systems leads to 2^{3} = 8 collective states.

But 100 binary sub-systems leads to 2^{100} ≈ 1.27 x 10^{30} collective states.

This state space expansion occurs exponentially as the number of sub-systems increases and it occurs faster for more complex sub-systems. For example, if c = 10 then thirty sub-systems results in a state space of 10^{30}.
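The state counts above are simply c^n; a trivial check:

```python
def collective_states(c, n):
    # c^n collective states for n sub-systems each of complexity c
    return c ** n
```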

This state space expansion effect for systems with many sub-systems combines with the normalisation reduction effect to significantly reduce one's ability to influence complex systems.

This effect combined with the "Sub / Super System Experiment" discussed next indicates that there are massive gains in effectiveness on complex systems if one considers sub-systems directly rather than as a single collective system. But that gets harder to do as the system gets more complex; for example, one cannot visualise every atom in the planet. We need to utilise higher level conceptualisations of some kind, but it is best to keep the focus as low level as possible. We cannot always focus on the lowest level, but to jump straight to the whole system results in lost effectiveness so it is best to utilise intermediate sub-systems according to our ability to comprehend the complexity.

I showed above that in the case of a single system one has greatest effect when one focuses on a single outcome and does not spread one's will across several possible outcomes, so it helps to be specific. But what if there are multiple systems? How can one have maximum effect on the total system? How does the effect differ if a person wills upon particular sub-system states or upon the equivalent super-system state? How is the final effect altered by the system level at which one conceptualises the system? Is it better to be specific or general? Should one focus on a single point of control or on many points?

In this experiment a person wills upon a particular macroscopic state of a complex system and they separately will upon particular sub-system states that are equivalent to a macroscopic state. A mathematical model of this scenario is shown below.

Consider three binary systems (REG's), which can be represented by binary digits. They can be thought of as three separate binary systems or they can be formed into a three digit binary system that can take values in the range from (000 - 111) or (0 – 7) in decimal format.

The person is initially placed in front of three binary REG's and is given three binary digits indicating how they are to will upon the machines. Then in a second scenario they are placed in front of a single eight state REG machine and they are given a single decimal digit from 0 to 7 indicating how they are to will upon that machine. The eight state REG is formed out of the three binary REG's.

D = 0.14^{2} x 6 = 0.12

Thus willing upon the individual sub-systems is considerably more effective than willing upon the single super-system state. Thus to influence a complex system it is best to visualise as many of the sub-systems as possible and for each of these to will for a particular state. Thus it is best to visualise completely and specifically.
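One way to reconstruct the numbers in this experiment; this is an assumption-laden sketch (unit total will power, focus split evenly across the three binary REGs in the sub-system case, all focus on one collective state in the super-system case, and Δx = e^w throughout), and these choices reproduce the quoted D = 0.14^{2} x 6 ≈ 0.12.

```python
import math

def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

def difference_power(before, after):
    return sum((a - b) ** 2 for a, b in zip(after, before))

# sub-system case: three binary REGs, r_i^2 = 1/3 on one state of each
w = 1.0 / math.sqrt(3)
per_reg = difference_power([0.5, 0.5], probs([math.exp(w), 1.0]))
d_sub = 3 * per_reg                  # ~ 0.14^2 * 6 ~ 0.12

# super-system case: one 8-state REG, all will power on one state (w = 1)
d_super = difference_power([1 / 8] * 8, probs([math.e] + [1.0] * 7))

ratio = d_sub / d_super              # willing on sub-systems wins
```

Under these assumptions the sub-system approach comes out roughly four times more effective.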

How does it differ if one wills to inhibit a particular state or if one wills to accentuate all other states and thereby indirectly inhibits that state?

Consider a three state system with states labelled 0, 1 & 2. A person can either will to accentuate states 0 & 1 OR they can will to inhibit state 2.

Thus it is more effective to will upon a particular state than it is to will upon all other states and thereby indirectly influence the remaining state.
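Assuming unit total will power and Δx = e^w as in the model above, the two options can be compared numerically; this sketch is my reconstruction, not the author's original working.

```python
import math

def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

def difference_power(before, after):
    return sum((a - b) ** 2 for a, b in zip(after, before))

uniform = [1 / 3] * 3

# accentuate states 0 and 1: r = [1/sqrt(2), 1/sqrt(2), 0]
w = 1.0 / math.sqrt(2)
d_accentuate = difference_power(
    uniform, probs([math.exp(w), math.exp(w), 1.0]))

# inhibit state 2 directly: r = [0, 0, 1] with negative will, w_2 = -1
d_inhibit = difference_power(uniform, probs([1.0, 1.0, math.exp(-1.0)]))
```

Under these assumptions direct inhibition is roughly 1.7 times more effective than the indirect route.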

Consider a complex binary system such as a deck of 52 cards which can be either black or red. This can be thought of as 52 states or as two states (black or red).

If the person is willing for more black than red cards then the will power is spread across the 26 black states, so r_{i}^{2} = 1/26 and w_{i} = 1/√26 ≈ 0.196 for each black state, giving Δx_{i} = e^{0.196} ≈ 1.217 and p_{i} ≈ 0.021 per black state.

Thus p_{black} = 0.021 x 26 = 0.546

Whereas for a single binary system all of the will power is focused on 'black', so w = 1, Δx = e and p_{black} = e / ( e + 1 ) ≈ 0.731.

Therefore the complexity of the system reduces one's ability to influence it and it would be 530 times more effective to visualise it as a simple black/red binary system.
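The two views of the deck can be compared under the same kind of assumptions (unit will power, focus spread evenly over the 26 black states in the 52-state view). Note the unrounded p_black is ≈ 0.549; the quoted 0.546 arises from rounding p_{i} to 0.021 before multiplying.

```python
import math

def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

# 52-state view: r_i^2 = 1/26 on each of the 26 black states
w = 1.0 / math.sqrt(26)
deck = probs([math.exp(w)] * 26 + [1.0] * 26)
p_black_52 = sum(deck[:26])          # ~ 0.55

# binary black/red view: all will power on 'black', w = 1
p_black_2 = math.e / (math.e + 1.0)  # ~ 0.73
```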

Instead of just willing for more heads OR less tails, what happens if the person wills for more heads AND less tails?

And compare this with the previously calculated case of willing just for more heads.

Therefore it is 1.7 times more effective to will upon both positive AND negative points of control rather than to just will for positive OR negative states.
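The 1.7 factor can be reproduced under the same kind of assumptions (unit will power, Δx = e^w, with negative will on tails in the AND case); again, this is a reconstruction rather than the author's original working.

```python
import math

def probs(x):
    T = sum(x)
    return [xi / T for xi in x]

def difference_power(before, after):
    return sum((a - b) ** 2 for a, b in zip(after, before))

fair = [0.5, 0.5]

# OR: all will power on 'more heads', r = [1, 0]
d_or = difference_power(fair, probs([math.e, 1.0]))

# AND: split between 'more heads' and 'less tails',
# r = [1/sqrt(2), -1/sqrt(2)]
w = 1.0 / math.sqrt(2)
d_and = difference_power(fair, probs([math.exp(w), math.exp(-w)]))
```

The ratio d_and / d_or comes out at roughly 1.7 under these assumptions.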

The preceding discussion has introduced a mathematical model of the action of intentional influence or the use of will power to influence the state of a system. The model describes detailed relations between the quantities focus, force, will, will power, influence and effect.

This model gives rise to quantifiable and testable predictions regarding the degree of effect on systems in different scenarios. These could be experimentally tested using REG machines or other such experiments. If the experimental results confirm the theoretical predictions then this would lend credence to the model as a description of the underlying information theoretic processes that implement the phenomenon of intentional influence. In such a case the model would also give a quantifiable measure of will power, so one could use REG machines in conjunction with the model to measure an agent's will power. If the experimental results do not confirm the theoretical predictions the general approach exhibited here could be refined to produce variations that more closely match the experimental data. In this way I propose a possible method for the scientific analysis of will power and intentional influence.