There is an
argument that I have noted a couple of times when scientific colleagues with
religious beliefs have tried to explain to me how they reconcile these
seemingly contradictory things. Reality, they say, is divided into two classes
of phenomena: the natural and the supernatural. Natural phenomena, they say,
are the things that fall into the scope of science, while the supernatural lies
outside of science’s grasp, and can not be addressed by rational investigation.
This is completely muddle-headed, and seems to me to be based on an example of
something called the mind projection fallacy.
A similar
argument also crops up occasionally when advocates of alternative medicine try
to rationalize the complete failure of their favorite pseudoscientific therapy
to provide any evidence of efficacy in rigorous trials.
The very word
‘supernatural,’ though, is at its heart one of those utterly self-defeating
terms, like ‘free will’ and ‘alternative medicine’: completely devoid of
meaning and philosophically bankrupt. What is this free will that people keep
going on about? Is it the freedom to break the laws of physics? No, and since
every particle in your brain obeys the laws of physics, you are not free to
make non-mechanistic decisions, so can we please shut up about free will?
(Granted, I am using a restricted meaning of the term ‘free will.’)
And what is
alternative medicine? Medicine is the use of interventions that are known to
work in order to lessen the effects of disease. If it doesn’t work, or is not known
to work, then it’s not medicine, full stop. There is no alternative. Let’s please
shut up about alternative medicine.
What is the
supernatural? Nature is by definition everything that exists and happens. What
is outside nature is therefore necessarily an empty set.
The etymology of
the word ‘supernatural’ is the result of an error of thinking. This error is
the mind projection fallacy: falsely assuming that the properties of one’s
model of reality necessarily exhibit correspondence with the actual properties
of reality. The following dictionary definition of ‘supernatural’ was quoted to
me in a recent discussion of the term (reportedly from Webster’s):
Supernatural. [adjective:]
1. of, pertaining to, or being ‘above or beyond what is natural or explainable by natural law.’
2. of, pertaining to, or attributed to God or a deity.
3. of a superlative degree; preternatural.
4. pertaining to or attributed to ghosts, goblins, or other unearthly beings; eerie; occult.
Number 1 is
where we have to focus. Numbers 2 and 4 are, in origin at least, based on
erroneous application of number 1, while number 3 is just weird. ‘Of a
superlative degree’? That’s not supernatural, by any reasonable standard.
‘Preternatural’? This word has two meanings (according to Dictionary.com): one
is ‘supernatural’ (wow, that’s helpful) and the other is ‘exceptional or
abnormal.’ Finding a hundred euros on the pavement would be both exceptional
and abnormal, but again, not supernatural unless we are willing to debase the
meanings of words to the level of uselessness.
So what about
the primary meaning of supernatural, ‘above or beyond what is natural or
explainable by natural law’? The first part poses a problem, since there is no
supplied procedure for determining what is natural, other than the obvious
definition: ‘whatever is not supernatural.’ Now I’m aware that all word
definitions are ultimately circular, but this is a case where the radius of
curvature is clearly far too small to represent any useful addition to the
language. The second part stipulates ‘explainable by natural law,’ which
succumbs to exactly the same objection, but I strongly suspect that many people
have failed to see this precisely because they have committed the mind projection
fallacy: in this case, the conflation of natural law with our description of it.
Natural law is the set of principles, whatever they may be, that determine how
real phenomena evolve. If a phenomenon is real, then it would be explainable by
natural law. If a phenomenon is not real, then what is the point in debating
whether or not it is supernatural? I feel, however, that too many people think
that natural law is some set of equations, like E = mc², written
down in textbooks, but this is merely our model of natural law. I see no
other convincing way to account for the appearance of this phrase in the quoted
dictionary definition than to assume that natural law is commonly being
confused with known science, since there seems to be no other
good reason to postulate that a phenomenon is not governed by natural law (I
know, the exact word was ‘explainable,’ but I think it is hard to rescue the
situation by invoking this subtle difference).
By this common
understanding of ‘supernatural,’ the photoelectric effect would have been
supernatural in 1904, but natural by the end of 1905, once Einstein had
explained it. A strange state of affairs, you might think.
Of course, word
meanings don’t have to stick exactly to their original literal meanings, and
anybody is free to apply the word ‘supernatural’ to any putative phenomenon
they wish: gods, ghosts, whatever (as long as they are clear about what they
are doing). But I argue, firstly, that this is a misnomer, as nothing can be
literally beyond nature (supernatural), and secondly, that use of this misguided
word leads to horrendous confusions, such as those allowing highly educated and
otherwise rational people to claim that religious phenomena (or homeopathy, or
chi) are by definition supernatural, and therefore by definition beyond the
scrutiny of science.
The mind
projection fallacy also raises its head in science, all too often, such as in
quantum physics, and, in my opinion, in thermodynamics. It has also had very
substantial consequences in the development and application of probability
theory. Since the scientific method generally strives to avoid fallacious
reasoning, I feel that it is important to get well acquainted with this
particular mental glitch, and to recognize some of the fields in which it still
extends a corrupting influence.
Looking at
thermodynamics, the famous second law, stating that the entropy of a closed
system tends to increase, is often explained by the experts as resulting from
our inability to distinguish between individual molecules (or other particles).
My conviction, however, is that the mechanical evolution of an ensemble of such
particles is unchanged if we are granted a means to identify them after they
have evolved. The real reason for
the second law seems to be that the proportion of possible initial microstates
that result in non-increased entropy is vanishingly tiny (such states therefore
appear with vanishing probability), but a mind polluted with the standard
language of the discipline can find this hard to grasp (a toy calculation below
makes the point concrete). Why is this standard language an example of the mind
projection fallacy? Because there is something unknown to us (the identities of
the particles), and the fact of its being unknown is presented as the cause of
the physical evolution of the system, and therefore as a physical property of
the system. It is not, though; it is a property of our knowledge of the system.
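Here is a minimal sketch of that counting argument, a toy model of my own devising rather than anything from the thermodynamics literature: N distinguishable particles, each equally likely to sit in the left or right half of a box. The lopsided, low-entropy macrostates claim only a minuscule fraction of the equally probable microstates:

```python
# Toy model: N particles, each independently in the left or right half
# of a box. A macrostate with n particles on the left has entropy
# proportional to log C(N, n), so lopsided splits are low-entropy states.
# Count the fraction of all 2^N equally likely microstates that are
# noticeably lopsided (here, a 70/30 split or worse).
from math import comb

N = 100  # number of particles (tiny by thermodynamic standards)

total = 2 ** N
lopsided = sum(comb(N, n) for n in range(N + 1) if abs(n - N // 2) >= 20)

print(f"Fraction of microstates at least 70/30 lopsided: {lopsided / total:.1e}")
# Roughly 1e-4 already for N = 100; for a mole of particles (N ~ 6e23)
# the fraction is so fantastically small that such states are never observed.
```

Nothing in this calculation refers to whether we can tell the particles apart; the second law drops out of sheer counting.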
With regard to
quantum mechanics, I am open minded on the matter of whether or not nature
evolves deterministically or non-deterministically. As I try to be a good
scientist, I wait for the evidence to favour one hypothesis strongly over the
other before casting my judgement. For all that we already know about quantum
mechanics, that evidence is not yet in. I am, however, highly uncomfortable and
skeptical about the possibility of something operating without a causal
mechanism, yet exhibiting clear tendencies. It seems I am not alone in this, as
several other well regarded thinkers have apparently shared this view, notably
among them, the exceptional theoretical physicists Albert Einstein, Louis de
Broglie, David Bohm, John Bell, and Edwin Jaynes. Imagine my delight,
therefore, when I discovered the following passage by Jaynes in a book of
conference proceedings [1], articulating magnificently (and far better
than I ever could) many of my own long-felt misgivings about the language of
quantum mechanics:
The current literature on quantum theory is saturated with the Mind
Projection Fallacy. Many of us were first told, as undergraduates, about Bose
and Fermi statistics by an argument like this: “You and I cannot distinguish
between the particles; therefore the particles behave differently than if we
could.” Or the mysteries of the uncertainty principle were explained to us
thus: “The momentum of the particle is unknown; therefore it has a high kinetic
energy.” A standard of logic that would be considered a psychiatric disorder in
other fields, is the accepted norm in quantum theory. But this is really a form
of arrogance, as if one were claiming to control Nature by psychokinesis.
Whether or not
the position and momentum of a particle (as related in the most familiar
version of the Heisenberg principle) are truly ‘undetermined’ or merely
unknowable to us, I am unsure, but there is a commonly encountered assumption
that these two possibilities must be the same, and this results
from the mind projection fallacy. It is indeed a mighty challenge to reconcile
quantum phenomena with a fully deterministic mechanics. Some have succeeded,
but it remains a challenge to pin down whether or not this is nature’s way.
Many, however, follow Bohr and assert that there can be no underlying mechanism
behind quantum phenomena. Let me quote Jaynes again (this time from
‘Probability theory: the logic of science’):
the ‘central dogma’ [of quantum theory]… draws the conclusion that
belief in causes, and searching for them, is philosophically naïve. If
everybody accepted this and abided by it, no further advances in understanding
of physical law would ever be made… it seems to us that this attitude places a
premium on stupidity.
The field in
which the mind projection fallacy has had its most significant practical
consequences is perhaps probability theory, which is a colossal shame, as
probability is the king of theories: the meta-theory that decides how all other
theories are obtained.
If you scan
through the articles I have posted here on probability, you’ll observe that most
if not all make use of Bayes’ theorem. It is an incredibly important and useful
part of statistical reasoning, and represents the core of how human knowledge
advances. It is also derived simply, as a trivial rearrangement of two of the
most basic principles of probability theory: the product and sum rules (see the
one-line derivation after this paragraph). Yet,
for a significant portion of the 20th century, when statistical
theory was undergoing explosive development, Bayes’ theorem was rejected by the
majority of authorities and practitioners in the field. How on Earth could this
have come about? The mind projection fallacy, of course.
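For concreteness, here is that rearrangement, written out. The product rule factors the joint probability P(A, B) in two symmetric ways; dividing through by P(B) gives Bayes’ theorem (the sum rule enters when P(B) is expanded over the competing hypotheses):

```latex
% Product rule, factored both ways:
P(A, B) = P(A \mid B)\, P(B) = P(B \mid A)\, P(A)

% Divide through by P(B) to obtain Bayes' theorem:
P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}
```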
Because the
theory models real phenomena in terms of probabilities, it was assumed that
these probabilities must be real properties of the phenomena. Yet Bayes’
theorem converts a prior probability into a posterior probability by the
addition of mere information. And since merely changing the amount of
information cannot affect the physical properties of a system, Bayes’
theorem must simply be wrong. QED.
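To see what conversion ‘by the addition of mere information’ looks like in practice, here is a minimal numerical sketch with invented toy numbers (a hypothetical diagnostic test, not anything from the original argument). Nothing physical about the patient changes between the two lines of output; only our information does:

```python
# A Bayesian update with toy numbers: P(disease | positive test).
prior = 0.01          # P(disease) before seeing the test result
sensitivity = 0.95    # P(positive | disease)
false_pos = 0.05      # P(positive | no disease)

# Denominator P(positive), via the sum rule over the two hypotheses:
evidence = sensitivity * prior + false_pos * (1 - prior)

# Bayes' theorem:
posterior = sensitivity * prior / evidence

print(f"Prior:     {prior:.3f}")
print(f"Posterior: {posterior:.3f}")  # ~0.161: same patient, new information
```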
The property
that probability was thought to correspond to was frequency. For example, a
coin has a 50% probability of landing heads up because the relative frequency with
which it does so is one half. For this reason, the orthodox school of
statistical thought has become known as frequentist statistics.
Ronald Fisher, one of the most
extraordinary scientists of the 20th century, dominated the development of
statistical theory during his lifetime. In his highly influential book, ‘The
Design of Experiments’ [2], he gave three reasons for rejecting Bayes’ theorem,
foremost of which is:
… advocates of inverse probability [Bayes’ theorem] seem forced to
regard probability not as an objective quantity measured by observable
frequencies….
Clearly, he
meant that the impossibility of reconciling Bayes’ theorem with the view of
probability as a physical property of real objects (the frequencies with which
different events occur) made it impossible for him to accept the theorem. (His
other two reasons are just as bad.) It was Fisher’s deeply held objection to the
logical foundations of probability theory that led him to do some of the most
important work developing and popularizing the frequentist significance tests,
which, as I have argued in detail here and here, are a poor method for assessing data.
Another
influential textbook, by Harald Cramér [3], asserts that ‘any random variable
has a unique probability distribution.’ Again, the assumption is that
probability is something objective and immutable, a physical property. The
randomness is assumed to be necessarily a property of the system under study,
rather than a statement of our lack of information, of our inability to predict
the outcome beforehand.
To instantly recognize the ridiculousness of both Fisher’s and Cramér’s views,
consider that I have just tossed a coin, which has landed, and I am asking you
to assess the probability that the face of the coin pointing up is the one
depicting the head: your only rational answer is 0.5, and it is the correct
answer, for you. For me, though, the correct
answer is 1, because I am looking at the coin, and I can see the head facing
up. Same physical system, different probabilities, depending on the available
information.
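The same point in a few lines of code, a sketch of my own (the sets here simply enumerate the outcomes that each observer still considers possible, given their information):

```python
# One coin, one physical outcome, two observers with different information.
import random

outcome = random.choice(["heads", "tails"])  # the coin has already landed

# Your information: you have not seen the coin, so both faces remain possible.
your_possibilities = {"heads", "tails"}
p_heads_you = len({"heads"} & your_possibilities) / len(your_possibilities)

# My information: I am looking at the coin, so only one outcome remains possible.
my_possibilities = {outcome}
p_heads_me = len({"heads"} & my_possibilities) / len(my_possibilities)

print(f"Actual outcome:            {outcome}")
print(f"Your probability of heads: {p_heads_you}")  # always 0.5
print(f"My probability of heads:   {p_heads_me}")   # 1.0 or 0.0
```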
On the subject
of probabilities as physical properties of the systems we study, I can again
quote Jaynes, who has summarized the situation beautifully:
It is
therefore illogical to speak of verifying [the Bernoulli urn rule, a law for
determining probabilities] by performing experiments with the urn; that would
be like trying to verify a boy’s love for his dog by performing experiments on
the dog.
We can easily
identify other instances of the mind projection fallacy in probability
reasoning, some of which I have already discussed in earlier posts. For
example, the error of thinking discussed in Logical v’s Causal Dependence,
namely the belief that the expression P(A|B) can only be different from
P(A) if B exerts a causal effect on A (an error that has made it into a number
of influential textbooks on statistical mechanics), seems to arise from the
conviction that a probability is an objective property of the system under
study. If B changes the probability for A, then according to this belief, B
changes the physical properties of A, and must therefore be, at least
partially, the cause of A. (The sketch below gives a counterexample.)
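Here is a minimal counterexample of my own, in the spirit of that post though not taken from it: two draws without replacement from a small urn. The second draw happens after the first, so it cannot causally influence it, yet learning its outcome changes the probability of the first:

```python
# Logical dependence without causal dependence: draw two balls without
# replacement from an urn with two red and two white balls.
# A = "first ball is red", B = "second ball is red".
from itertools import permutations
from fractions import Fraction

urn = ["red1", "red2", "white1", "white2"]  # two red, two white balls
draws = list(permutations(urn, 2))          # 12 equally likely ordered pairs

A = [d for d in draws if d[0].startswith("red")]   # first draw red
B = [d for d in draws if d[1].startswith("red")]   # second draw red
A_and_B = [d for d in draws if d in A and d in B]

p_A = Fraction(len(A), len(draws))           # P(A)   = 1/2
p_A_given_B = Fraction(len(A_and_B), len(B)) # P(A|B) = 1/3

print(f"P(A)     = {p_A}")
print(f"P(A | B) = {p_A_given_B}")  # different, though B cannot possibly cause A
```

Learning that the second ball is red tells us one of the red balls was unavailable to the first draw; the information flows backwards even though causation cannot.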
Another instance
is to be found in The Raven Paradox, and consists of the belief that whether
or not a particular piece of evidence supports a hypothesis is an objective
property of the hypothesis, or the real system to which the hypothesis relates.
In that post, we examined the supposition that observation of a sequence of
exclusively black ravens supports the hypothesis that all ravens are black. We
discovered an instance where such observations actually support the opposite
hypothesis, illustrating that the relationship between the hypothesis and the
data is entirely dependent on the model we choose. To think otherwise was shown
to lead to disturbing and indefensible conclusions about ravens.
[1] 'Maximum Entropy and Bayesian Methods,' edited by J. Skilling, Kluwer Publishing, 1989
[2] 'The Design of Experiments,' R.A. Fisher, Oliver and Boyd, 1935
[3] 'Mathematical Methods of Statistics,' H. Cramér, Princeton University Press, 1946