Here is perhaps the most important fact about scientific method that anybody can ever learn: the optimal course of a scientific investigation is to provide probability assignments for propositions about the universe, and when scientific method deviates from this optimum path, it is valid only to the extent that it successfully approximates this ideal. There are simple reasons for this:

First, we would love to be able to say that we are 100% certain about X, that Y is guaranteed to be true, or that fact Z about the universe has somehow entered our heads and impressed infallible knowledge of its necessary truth on our minds, but of course, except for the most trivial propositions, none of these is possible.

Second, and far more fundamentally, calibration of any instrument requires certain symmetries of physical law to be hypothesized. Here's what I mean:
If a 63.7 kg weight caused my machine to go 'beep' yesterday, I might postulate that a 63.7 kg weight will do the same today, because I'm assuming that the relevant laws of physics (and my device) are the same. If a mass on a spring oscillates at a given frequency, allowing me to count out a certain number of seconds, I might assume that changing my location on the surface of the Earth will not change that frequency. (And by the way, how would I know that the frequency was fixed at all?) These are hypotheses that may or may not be true. The only way to test such hypotheses is by the development of further instrumentation. Such instrumentation, though, is subject to a similar calibration problem, reliant on some other kind of analogical reasoning.
Until I realize that solid objects expand and contract as the average kinetic energy of their atoms changes, it might never occur to me that length measurements taken with a simple ruler vary slightly, but systematically, with the ambient temperature. Furthermore, in order to discern that such a bias is present, I need to make a comparison against some other calibrated standard. Such auxiliary standards, though, will always suffer the same type of vulnerability.
Thus, we cannot prove with 100% certainty exactly which symmetries hold in nature (though the assumption that some symmetries hold is a priori sound, which I might get round to in a future post). We can only demonstrate that up to now, our experiences are consistent with some set of postulated symmetries. The process of testing assumed principles of calibration (laws of physics) against empirical experience in this way is known as induction.
So, faced with the impossibility of knowing with absolute certainty any but the most trivial facts about the world (e.g. that only things that exist are affected by gravity), we must fall back on the next best thing: to quantify our justifiable degrees of belief in the various propositions we are interested in. Prescribing the manner in which this is achieved is the task of probability theory. Essentially, induction works by applying Bayes' theorem, or some reasonable approximation.
As for those very trivial statements that we can deduce with total confidence, what do we get from those? Nothing, really. Take a look at my example: only things that exist are affected by gravity. Does this tell us anything about things that actually exist? In fact, no. It only tells us about things that don't exist. This is both why it is so trivial, and why it can be known without scrutinizing any evidence.

Let's think about one of the most famous examples, Descartes' cogito: 'I think, therefore I am.' Again, this doesn't really tell us anything. It doesn't say what I am, what it is to think, or what it is to be, only that thinking, like gravitational attraction, is a property limited to things that exist. It doesn't even suggest, for example, that I am in any way a separate entity from every other thinking object in the universe.
For completeness, there is strictly no way out of the calibration problem. Consider the following thought experiment: imagine for the sake of argument that at some time, some clever scientist somehow devises an argument that identifies a unique set of symmetries in physical law, such that all other possible sets of symmetries lead to statements that are self-contradictory, and therefore cannot be true. Imagine that this miraculous argument is actually correct. Obviously, the validity of such an argument and our knowledge of that validity are not the same thing. The latter relies on our ability to confidently check the required logic, implying that there is yet another instrument in need of calibration: our own intellectual faculties, the fidelity of which cannot, by any possible means, be established a priori.
Inductive inference is often contrasted with deductive logic. Deductive logic performs trivial operations on assumed premises, to draw conclusions that, according to the system, cannot be false if the premises are true. The classic example starts from two premises, (i) 'all men are mortal,' and (ii) 'Socrates is a man,' to reach the unavoidable result, 'Socrates is mortal.'
Some well-known philosophers of science believed that because inductively derived information is not capable of guaranteeing truth, it must be inferior to deductive logic, or worse (as with Karl Popper), strictly useless - another fine example of the mind-projection fallacy: because a statement about reality can only be true or false, any degree of belief in it that I possess must be all or nothing - a position I have refuted elsewhere. (Popper is best known for his falsifiability criterion - in 'Inductive inference or deductive falsification?' I show that, contrary to Popper and others, falsification must be inductive, but it's also important to note that falsification is not the only direction in which science can progress.)
Deduction often feels far more steadfast than inductive inference, because of its power to guarantee the conclusion from the employed premises, but really, deduction on its own tells us absolutely nothing about the world. Because of the calibration problem, the premises of any useful deduction cannot be guaranteed by any means. To put it another way, one may argue that mathematical theorems possess necessary truth, but to the extent that this is true, they apply only to abstract, mathematical objects, x's and y's, and not to real entities inhabiting the universe.
There may seem to be a problem, in that probability is a mathematical theory, meaning that all its theorems are derived using deductive logic. How can probabilistic reasoning be more powerful than deduction, if probability theory depends on deduction? It's what probability theory is about that allows it to lay legitimate claim to a uniquely privileged position among mathematical theories. The theory of differential calculus, for example, is a theory about x's and y's - entities with no real existence, not even in the mind of the person who fully perceives the theory. With differential calculus, however, I can use those x's and y's to represent, for example, space and time, and formulate a theory of gravitation. We might start from Newton's inverse square law and use the theory to predict that the planets will adopt elliptical orbits around the sun, but could we then know with deductive certainty that this is the truth? Of course not. Could we even infer that this is probably the truth? No, not without probability theory.
Probability theory is still a theory of abstract x's and y's, but the objects in this theory are no longer surrogates for masses on springs or aeroplane wings; they are rational agents and their rankings of believability. The theory of probability, therefore, provides a bridge between a mechanical theory and the thing that it is a theory of. It allows us - real agents - to quantify the correspondence between model and reality. On its own, a mechanical model, such as a theory of gravity, has no knowable relationship with what is actually going on. It is inductive inference that allows us to say, 'yes, the assumptions of this model are reasonable,' and 'yes, the predictions of this model match my experiences well.'
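As a minimal sketch of this bridging role (with all data invented for illustration), suppose a deductive model predicts three orbital radii, and we want to quantify how well some noisy measurements match those predictions under an assumed Gaussian error model:

```python
import math

# A sketch (all data invented) of probability theory bridging model and
# reality: a deductive model supplies predictions, and an assumed Gaussian
# error model quantifies how well noisy observations match them.

predicted = [1.00, 1.52, 5.20]  # hypothetical predicted orbital radii (AU)
observed = [1.01, 1.50, 5.23]   # hypothetical noisy measurements (AU)
sigma = 0.02                    # assumed measurement error (AU)

def log_likelihood(pred, obs, sigma):
    """Log-probability of the observations, given the predictions and an
    assumed Gaussian error model."""
    return sum(-0.5 * ((o - p) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi))
               for p, o in zip(pred, obs))

print(f"log P(data | model) = {log_likelihood(predicted, observed, sigma):.2f}")
```

The deduction supplies the predictions; only the probability model lets us say how strongly the observations support them.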
Finally, while it looks like inductive inference is founded on deductive logic, where do we suppose the axioms needed to derive our mathematical systems come from? It is surely perverse to suggest that they come from anywhere other than our experience of the world, and what works, intellectually. Such experience is derived in three ways:
(i) our population-genetic history - our brains are the way they are because the way they regulate our behaviour is a good match for the way the world operates, leading to efficient propagation of the genes that prescribe our brains' construction
(ii) our cultural history - early philosophers experimented with all manner of intellectual systems, eliminating all sorts of obvious mistakes along the way, and passing on a treasure trove of useful heuristics
(iii) our personal history - our direct contact with nature makes certain axiomatic systems feel highly unpalatable, because they just don't match what we see
To spell it out explicitly: deductive inference is in fact founded upon inductively derived principles.
The idea that inductive learning is more powerful than deductive logic has been recognized at least as far back as 1620, when probability theory was still in its infancy (just a few meager decades of faltering development). In that year, Francis Bacon, one of the founders of empirical scientific method, published his great work on the subject, 'Novum Organum.' The title means 'New Instrument,' and was a reference to Aristotle's 'Organon,' his treatise on deductive logic, which had stood for centuries as the accepted model for all epistemology. Bacon's title was carefully chosen to send the message, "Aristotle is now obsolete." Bacon's great contribution was to say that deduction alone gets you nowhere. If you want to know what the world is actually made of, and how it behaves, he argued, you must make observations and do experiments. Science, in fact all knowledge, is based on experience, not pure thought.
Excellent post. Given, as you've argued, that science is inductive and that the proper way to update our beliefs about the world is via Bayesian confirmation theory, I was wondering if you've considered writing a post on whether Bayes can solve Hume's and Goodman's problems of induction. If it cannot, then science cannot be trusted to make predictions about future states, and it becomes truly a rear-view mirror endeavor. Indeed, if they cannot be solved, then we cannot claim to have any real knowledge about the world at all...
Thanks for your comment.
The term 'knowledge,' often defined as 'justified true belief,' is highly problematic. The concept is hopelessly oversimplified, to the point of uselessness, and should be removed from all serious philosophical discussion.
It is true that inductive inference cannot guarantee the truth of relationships between past, present, or future entities. (For the purposes of inference, there is no important difference between past and future.) But to say that science can't be trusted to make predictions about the future is an oversimplification, and a stronger statement than saying that science can't guarantee our inferences. Lack of complete trust is not the same as complete lack of trust.
There is no solution to the problem of induction - no way to be 100% certain (independent of a probability model) of anything interesting. We can, however, apply arbitrarily many layers of sophistication in our pursuit of the truth (via model comparison), and thus examine our hypotheses with arbitrarily stringent tests.
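As a minimal sketch of one such layer (every number here is invented for illustration), this is Bayes' theorem applied at the level of models rather than propositions: two candidate probability models are scored by how probable they make the same data, and their posterior odds follow directly.

```python
# A sketch of Bayesian model comparison (all numbers invented): Bayes'
# theorem applied one level up, scoring two candidate probability models
# by how probable each makes the same observed data.

prior_M1, prior_M2 = 0.5, 0.5  # assumed prior odds: no initial preference
p_data_M1 = 0.002              # hypothetical marginal likelihood, P(data | M1)
p_data_M2 = 0.010              # hypothetical marginal likelihood, P(data | M2)

bayes_factor = p_data_M2 / p_data_M1
posterior_odds = bayes_factor * (prior_M2 / prior_M1)
print(f"Bayes factor (M2 vs M1): {bayes_factor:.1f}")
print(f"posterior odds favouring M2: {posterior_odds:.1f} to 1")
```

The posterior odds favour whichever model better anticipates the data, but, as with every inductive step, they never amount to proof.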
It is thus possible to be highly confident of our inferences, and to say "given the information I have now, there is no good reason for acting as if X is not true."
Hume and Goodman's problems do not require us to be 100% certain in our inferences. The problems remain even if we consider our predictions to be highly probable, rather than certain (e.g., that we expect the sun to rise tomorrow with very high probability). It is always possible that Nature is going to 'go off the rails' in the next observation interval. In fact, if Hume and Goodman are correct, unless we presuppose the uniformity of Nature (even in the probabilistic sense), there is no way to assign a probability to the next observation. If this is the case, then we cannot use past scientific success to predict future observations, even probabilistically - which means that we cannot claim to know anything about the world beyond what has already been observed. Note that it is partly Hume's problem of induction which led Popper to reject inductivism in favor of deductivism. But as you rightly point out, given that falsificationism is also inductive, Popper's solution ultimately fails.
Now, I am not saying that Hume's and Goodman's problems cannot be solved via probabilistic reasoning a la Bayes; I'm just curious about your thoughts on what this solution might look like. How would Bayesian confirmation theory provide grounds for believing that all emeralds are 'green' rather than 'grue'?
Yes, it is always possible that nature will go 'off the rails,' but this does not invalidate any previous probability assignment. A probability assignment is not a statement only about some external state of the world. It is a statement about our relationship with that state of the world.
Two rational agents, possessing exactly the same information, can legitimately arrive at two different probability assignments for the same proposition, if they use different probability models. This does not invalidate either probability assignment. The calculus of probability is impossible without such probability models, and there is no prior principle for choosing one over the other (assuming they are both mathematically valid - and even this criterion is problematic, under the closest scrutiny).
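To sketch the point with the 'grue' example from your comment (numbers invented for illustration): 'all emeralds are green' and 'all emeralds are grue' assign identical likelihoods to every observation made so far, so Bayes' theorem leaves each agent's prior odds untouched, and agents with different priors legitimately end up with different assignments.

```python
# A sketch (numbers invented) of two agents with identical data but
# different probability models. 'All emeralds are green' and 'all emeralds
# are grue' give the same likelihood to every emerald observed so far, so
# the posterior odds between them simply equal the prior odds.

n_obs = 1000  # emeralds observed so far, all looking green
lr = 1.0      # likelihood ratio per observation: both hypotheses predict 'green'

for prior_green, prior_grue in [(0.99, 0.01), (0.50, 0.50)]:  # two agents
    posterior_odds = (prior_green / prior_grue) * lr ** n_obs
    print(f"prior odds {prior_green / prior_grue:.0f}:1 for 'green' "
          f"-> posterior odds {posterior_odds:.0f}:1")
```

The data cannot separate the two hypotheses; only the choice of probability model (here, the prior) does, which is exactly why a probability assignment is a statement about our relationship with the world, not about the world alone.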
We can use past success to predict future events. This does not guarantee those predictions will be correct. (This is just as true, concerning inferences about the past.) We do need to assume some kind of uniformity to do this, but there is no way around this, and there is nothing to say against such a procedure. If we begin to suspect the exact form of our symmetry assumptions (probability models), we can test them by applying Bayes' theorem at a higher level.
What Hume and Popper both wanted was for our inferences to be absolutely guaranteed (independent of any vulnerable assumptions). There is no way to grant this wish, probabilistic or otherwise.