Seriously, why all this fuss about rationality and science, and all
that? Can we be just as happy, or even happier, being irrational as being
rational? Are there aspects of our lives where rationality doesn't help? Might
rationality actually be a danger?
Think for a moment about what it means to desire something.
To desire something trivially entails desiring an efficient means to
attain it. To desire X is to expect my life to be better if I add X to my
possessions. To desire X, and yet not avail myself of a known opportunity to
increase my probability of adding X to my possessions, is therefore either (1) to do
something counter to my desires, or (2) to desire my life not to get better.
Number (2) is strictly impossible – a better life for me is, by definition, one in which more of my desires are fulfilled. Number (1) is incoherent
– there can be no motivation for anyone to do anything against their own
interests. Behaviour mode (1) is not impossible, but it can only be the result
of a malfunction.
Let’s consider some complicating circumstances to check the robustness
of this.
- Suppose I desire a cigarette. Not smoking the cigarette, however, is clearly in my interests. There is no contradiction here. Besides (hypothetically) wanting to smoke something, I also have other goals, such as a long healthy life, which are of greater importance to me. To desire a cigarette is to be aware of a part of my mind that mistakenly thinks this will make my life better, even though, in expectation, it will not. This is not really an example of desiring what I do not desire, because a few puffs of nicotine are not my highest desire – when all desires that can be compared on the same dimension are accounted for, the net outcome is what counts. Neither is it an example of acting against my desires if I turn down the offer of a smoke, for the same reason.
- Suppose I desire to reach the top of a mountain, but I refuse to take the cable car that conveniently departs every 30 minutes, preferring instead to scale the steep and difficult cliffs by hand and foot. Simplistically, this looks like a genuine desire not to avail myself of an efficient means of attaining my desires, but in reality, reaching the summit is clearly only part of the goal, another part being the pleasure derived from the challenging method of getting there.
Despite complications arising from the inner structure of our desires,
therefore, for me to knowingly refuse to adopt behaviour that would increase my
probability of fulfilling my desires is undeniably undesirable. Now, behaviour that
we know increases our chances of getting what we desire has certain general
features. For example, it requires an ability to accumulate reliable
information about the world. It is not satisfactory to take a wild guess at the
best course of action and just hope that it works. It might work, but it will not work reliably. My rational
expectation of achieving my goal is no better than if I do nothing. Reliability
begins to enter the picture when I can make informed guesses. I must be able to
make reliable predictions about what will happen as a result of my actions, and
to make these predictions, I need a model of reality with some fidelity. Not just fidelity,
but known fidelity – to increase the probability of achieving my goals, I need a
strategy that I have good reasons to trust.
It happens that there is a procedure capable of supplying the kinds of
reliable information and models of reality that enable the kinds of predictions
we desire to make, in the pursuit of our desires. Furthermore, we all know what
it is. It is called scientific method. Remember the reliability criterion? This
is what makes science scientific. The gold standard for assessing the
reliability of a proposition about the real world is probability theory – a
kind of reasoning from empirical experience. Thus the ability of science to say
anything worthwhile about the structure of reality comes from its application
of probability theory or any of several approximations that are demonstrably
good in certain special cases. If there is something better than
today's science, then its superiority must itself show up as a favorable outcome under
probabilistic analysis (since 'better' implies 'reliably better'); thus, whatever it is, it is tomorrow's science.
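The "reasoning from empirical experience" invoked here can be sketched concretely. Below is a minimal, hedged illustration (the numbers and the two-hypothesis setup are my own illustrative assumptions, not from the text) of how Bayes' rule raises the credibility of a proposition as confirming evidence accumulates:

```python
# Minimal sketch: updating belief in a proposition from empirical evidence
# via Bayes' rule. All numbers are illustrative assumptions.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of a hypothesis after observing one piece of evidence."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / marginal

# Start at 50/50 on some proposition about the world, then fold in three
# independent observations, each twice as likely if the proposition is true.
belief = 0.5
for _ in range(3):
    belief = bayes_update(belief, 0.8, 0.4)
print(round(belief, 3))  # prints 0.889: belief rises with each observation
```

The point is only that each observation moves the degree of belief by an amount the theory dictates, which is exactly the "known fidelity" demanded above.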
So, if I desire a thing, then I desire a means to maximize my
expectation of getting it; so I desire a means of making reliable predictions about the
outcomes of my actions; so I desire a model of the world in which I
can justifiably invest a high level of belief; thus I desire to employ
scientific method, the set of procedures best qualified to identify reliable
propositions about reality. Therefore, rationality is desirable. Full stop.
We cannot expect to be as happy by being irrational as by being
rational. We might be lucky, but by definition, we cannot rely on luck, and our
desires entail also desiring reliable strategies.
Items (A) to (D), below, detail some subtleties related to these
conclusions.
(A) Where’s the fun in that?
Seriously? Being rational is always desirable? Seems like an awfully
dry, humorless existence, always having to consult a set of equations before
deciding what to do!
This objection amounts to another instance of the mountain-climbing
example above, in which the climber chooses the difficult route to the top.
What is really meant by a dry existence is something like the elimination
of pleasant surprises, spontaneity, and ad-hoc creativity – and these
things are genuinely part of what we value.
Of course, unpleasant surprises are also possible, and we do value
minimizing those. The capacity to increase the frequency of pleasant surprises
without dangerously exposing ourselves to trouble is, of course, best
delivered by being rational. Being irrational in a contained way may be one of
our goals, but, as always, the best way to achieve this is by being rational
about it. (I won't have much opportunity to continue my pursuit of
irrationality tomorrow if I die recklessly today.)
(B) Sophistication effect
To be rational (and thus make maximal use of scientific method, as
required by a coherent pursuit of our desires) means making a study of the likely failure
modes of human reasoning (if you are human). This reduces the probability of
committing fallacies of reasoning yourself, thus increasing the probability
that your model of reality is correct. But there is a recognized failure mode
of human reasoning that actually results from increased awareness of failure
modes of reasoning. It goes like this: knowing many of the mechanisms by which
seemingly intelligent people can be misled by their own flawed heuristic
reasoning makes it easy for me to hypothesize reasons to ignore good
evidence when it supports a proposition that I don't like – "Oh sure, he says
he has seen 20 cases of X and no cases of Y, but that's probably confirmation bias."
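As an aside, the evidence being waved away in that hypothetical is numerically strong. Under a toy model of my own devising (each observed case a priori equally likely to be an X or a Y – an assumption not in the text), the chance of a 20–0 split arising by accident is tiny:

```python
# Toy calculation, illustrative assumptions only: if each observed case were
# equally likely to be an X or a Y, the probability of seeing 20 X's and
# 0 Y's purely by chance is 0.5 raised to the 20th power.
p_by_chance = 0.5 ** 20
print(p_by_chance)  # roughly 9.5e-07: hard to dismiss as mere bias
```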
Does this undermine my argument? Not at all. This is not really a
danger of rationality. If anything, it is a danger of education (though one
that I confidently predict a rational analysis will reveal to be insufficient
grounds for arguing for reduced education). What has happened in the above
example is of course itself a form of flawed reasoning: reasoning based
on what I desire to be true, and thus not rational. It may be a pursuit of
rationality that led me to reason in this way, but only because my
quest has been (hopefully temporarily) derailed. Thus my desire to be rational (entailed
trivially by my desiring anything at all) often makes it desirable for me to have the support of like-minded rational people, capable of pointing out the
error when even an honest quest for reliable information leads me into a trap
of fallacious inference.
(C) Where does it stop?
The assessment of probability is open-ended. If there is anything
about probability theory that sucks, this is it, but no matter how brilliant
the minds that come to work on this problem, no way around it can ever be
found, even in principle. It is just something we have to live with – pretending it's not there won't make it go away. What it means,
though, is that no probability can be divorced from the model within which it
is calculated. There is always a possibility that my hypothesis space does not
contain a true hypothesis. For example, I can use probability theory to determine the most likely coefficients, A and B, in a linear model used to fit some data, but investigation of the linear model says nothing about other
possible fitting functions. I can repeat a similar analysis using, say, a
three-parameter quadratic fit, and then decide which fitting model is the more
likely using Ockham's razor – but then what about some third candidate?
Or what if the Gaussian noise model I used in my
assessment of the fits is wrong? What if I suspect that some of the measurements
in my data set are flawed? Perhaps the whole experiment was just a dream. These possibilities can all be checked in essentially the
same way as those already considered (using probability
theory), but it is quite clear that the process can continue indefinitely.
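The linear-versus-quadratic comparison described above can be sketched in a few lines. The sketch below is hedged: it uses the Bayesian information criterion as a standard stand-in for Ockham's razor (my choice, not named in the text), and the data are synthetic, generated from a genuinely linear process with a small deterministic perturbation standing in for noise:

```python
# Sketch of the model-comparison step: fit the same data with a straight
# line and with a quadratic, then use BIC (an Ockham's-razor approximation)
# to penalize the extra parameter. Data and numbers are illustrative.
import math
import numpy as np

n = 50
x = np.arange(n, dtype=float)
# Truly linear process y = 2x + 1, plus a small alternating perturbation.
y = 2.0 * x + 1.0 + 0.1 * (-1.0) ** np.arange(n)

def bic(degree):
    """Bayesian information criterion for a polynomial fit of given degree."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    rss = float(np.sum(residuals ** 2))
    k = degree + 1  # number of fitted coefficients
    return n * math.log(rss / n) + k * math.log(n)

bic_linear, bic_quadratic = bic(1), bic(2)
print(bic_linear < bic_quadratic)  # the simpler model wins on this data
```

The quadratic always fits at least as well in raw residual terms, but the complexity penalty tips the balance toward the linear model – exactly the razor doing its job. And, as the text notes, nothing in this comparison rules out a third candidate, or a different noise model.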
Rationality is thus a slippery concept: how much does it take to be
rational? Since the underlying procedure of rationality, the calculation of
probabilities, can always be improved by adding another level, won't it go on
forever, precluding the possibility of ever reaching a decision?
To answer this, let us note that to execute a calculation capable of
deciding how to achieve maximal happiness and prosperity for all of humanity
and all other life on Earth is not a rational thing to do if the calculation is
so costly that its completion results in the immediate extinction of all humanity and all
other life on Earth.
Rationality is necessarily a reflexive process, both (as described
above) in that it requires analysis of the potential failure modes of the
particular hardware/software combination being utilized (awareness of cognitive
biases), and in that it must try to monitor its own cost. Recall that
rationality owes its ultimate justification to the fulfillment of desires. These
desires necessarily supersede the desire to be rational itself. An algorithm
designed to do nothing other than be rational would do literally nothing –
so without a higher goal above it, rationality is
literally nothing.
Thus, if the cost of the chosen rational procedure is expected to
prevent the necessarily higher-level desire from being fulfilled, then rationality
dictates that the calculation be stopped (or, better, never started). Furthermore, the (necessary) desire
to employ a procedure that doesn't diminish the likelihood of achieving the
highest goals entails a procedure capable of assessing and flagging when such
an outcome is likely.
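The cost-monitoring step just described can be sketched as a toy decision rule: choose among candidate procedures by expected benefit net of expected cost. Everything below (the procedure names, success probabilities, and costs) is an illustrative assumption of mine, not from the text:

```python
# Toy sketch of cost-aware rationality: pick the procedure that maximizes
# expected value of the goal minus the cost of running the procedure.
# All numbers are illustrative assumptions.

procedures = {
    "quick heuristic":     {"p_success": 0.70, "cost": 1.0},
    "careful calculation": {"p_success": 0.95, "cost": 8.0},
    "exhaustive analysis": {"p_success": 0.99, "cost": 200.0},  # finishes too late
}
goal_value = 100.0

def net_value(proc):
    """Expected payoff of the goal, less the cost of the procedure itself."""
    return proc["p_success"] * goal_value - proc["cost"]

best = max(procedures, key=lambda name: net_value(procedures[name]))
print(best)  # prints "careful calculation"
```

The exhaustive analysis is the most reliable in isolation, but its cost exceeds the value of the goal, so a rational agent never starts it – which is the point of the extinction example above.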
(D) Going with your gut feeling
On a related issue – concerning again the contingency of a rational
calculation (the lack of guarantee that the hypothesis space actually contains
a true hypothesis) and its potential difficulty – do we need to worry that the
computational cost and, ultimately, the possibility that we will
be wrong in the end will make rationality uncompetitive with our innate
capabilities of judgment? Only in a very limited sense.
Yes, we have superbly adapted computational organs, with efficiencies
far exceeding any artificial hardware that we can so far devise, and capable of
solving problems vastly more difficult than any rigorous probability-crunching
machine that we can now build. And yes, it probably is rational under many
circumstances to favor the rough-and-ready output of somebody's bias-ridden
squishy brain over the hassle of a near-impossible, but oh-so-rigorous
calculation. But under what circumstances? Either, as noted, when the cost of
the calculation prohibits the attainment of the ultimate goal, or when
rationally evaluated empirical evidence indicates that it is probably safe to
do so.
Human brain function is at least partially rational, after all. Our
brains are adapted for, and (I am highly justified in believing) quite
successful at, making self-serving judgments – a capacity which, as noted, is founded upon
an ability to form a reliable impression of the workings of our environment. And,
as also noted, the degree of rigor called for in any rational calculation is
determined by the costs of the possible calculations, the costs of not doing
the calculations, and the amount we expect to gain from them.
This is not to downplay the importance of scientific method. Let me
emphasize: a reliable estimate of when it is acceptable to rely on heuristics,
rather than full-blown analysis, can only come from a rational procedure. The list of known cognitive biases that interfere with sound reasoning is unfortunately rather extensive, and presumably still growing. The science informs us that, rather often, our innate judgment is significantly less successful than rational procedure.