I
often marvel at the achievements of the early scientific pioneers, the
Galileos, the Newtons, and the like. Their degree of understanding would have been extraordinary under any circumstances, but, as if that weren't hard enough, they had almost no
technical vocabulary to build their ideas from. They had to develop the
vocabulary themselves. How did they even know what to think, without a
theoretical framework already in place? Amazing. But at other times, I
wonder if their situation was not one of incredible intellectual
liberty, almost entirely unchained by technical jargon and untrammelled by rigorous notation. Perhaps it was a slight advantage for them, not
to have those vast regions of concept space effectively cut off from
possible exploration by the focusing effects of a mature scientific
language. Standardized scientific language may or may not limit the ease with which novel ideas are explored, but I think there are strong grounds for believing that jargon can actively inhibit comprehension of communicated ideas, as I now want to explore.
It's certainly true that beyond a certain elementary point, scientific progress, or any kind of intellectual advance, is severely hindered without a robust technical vocabulary, but we should not conflate the proliferation of jargon with the advance of understanding. Standardized terminology is vital for 'high-level' thought and debate, but all too often we treat this terminology as an indicator of technical progress or sophisticated thought, when it is the content of ideas we should be examining for such indications.
There is a common con trick, one we are almost expected to use in order to advance ourselves, which consists of enhancing credibility by expanding the number of words one uses and the complexity of the phrases they are fitted into. It seems as though one is trying to create the illusion of intellectual rigour and content, and perhaps it's not a bad guess that jargon proliferates most wildly where intellectual rigour is least supported by the content of the ideas being expressed. Richard Dawkins relates somewhere (possibly in 'Unweaving the Rainbow') a story of a post-modernist philosopher who gave a talk and, when a questioner confessed that he had been unable to understand some point, replied 'oh, thank you very much.' This suggests that the content of the idea was not important; otherwise the speaker would surely have been unhappy that it was not understandable. Instead, it was the difficulty of the language that gave the talk its merit.
It has been shown experimentally that adding vacuous extra words can have a powerful psychological effect. Ellen Langer's famous study [1], for example, consisted of approaching people in the middle of a photocopying job and asking to butt in. If the experimenter (blinded to the purpose of the experiment) said "Excuse me, I have 5 pages. May I use the xerox machine?", a modest majority of people let her (60%), but if she said "Excuse me, I have 5 pages. May I use the xerox machine, because I have to make copies?" the number of people persuaded to step aside was much greater (93%). This shows clearly how words that add zero information can greatly enhance credibility - an effect that is exploited much too often, and not just by charmers, business people, sports commentators, and post-modernists, but by scientists as well. The other day I was reading an academic article on hyperspectral imaging, a phrase that made me uneasy - I wondered what it was - until I realised that 'hyperspectral imaging' is exactly the same thing as, yup, 'spectral imaging.'
Even
if we have excised the redundancy from jargon-rich language, I often
suspect that technical jargon can actually impede understanding. Just as
unnecessary multiplicity of terms can enhance credibility at the
photocopier, I suspect that recognition of familiar jargon gives one an
easy feeling which is too often confused with comprehension. You can test
this with skilled scientists, by tinkering just a little bit with their
beloved terminology, and observing their often blank or slightly
panicked expressions. Once, when preparing a manuscript on the lifetimes
of charged particles in semiconductors (the lifetime is similar to the
half-life in radioactivity), in one place I substituted 'lifetime' with
the phrase ‘survival time.’ When I showed the text to a close colleague
(and far better experimentalist than me) for comments, he was very
uncomfortable with this tiny change. He seemed unable to relate this new
phrase to his established technical lexicon.
You
might think that this uneasiness is due to the need for each scientific
term to be rigorously defined and used precisely, but it's not.
Scientists mix up their jargon all the time quite freely, and without
anybody batting an eyelid most of the time. I have read, for example, an
extremely technical textbook in which an expert author copiously uses
the term ‘cross-section’ (something related to a particle’s
interactability, and necessarily with units of area) in place of
frequency, reaction probability, lifetime, mean free path, and a whole
host of concepts, all somewhat related to the tendency of a pair of
particles to bump into each other. Nobody minds (except for grumpy arses
like me), simply because the word is familiar in the context.
Tversky and Kahneman have provided what I interpret as strong experimental evidence [2]
for my theory that jargon substitutes familiarity for comprehension.
Two groups of study participants were asked to estimate a couple of
probable outcomes from some imaginary health survey. One group was asked
two questions in the form ‘what percentage of survey participants do
you think had had heart attacks?’ and ‘what percentage of the survey
participants were over 55 and had had heart attacks?’ By simple logic,
the latter percentage cannot be larger than the first, as 'over 55 and
has had a heart attack’ is a subset of ’has had a heart attack,’ but 65%
of subjects estimated the latter percentage as the larger. This is
called the conjunction fallacy. Apparently, the greater detail, all
parts of which sit comfortably together, creates a false sense of
psychological coherence that messes with our ability to gauge probabilities
properly.
The
other group was asked the same questions but worded differently: ‘out
of a hundred survey participants, how many do you think had had heart
attacks, and how many do you think were over 55 and had had heart attacks?'
Subjects in the second group turned out to be much less likely to commit
the conjunction fallacy, only 25% this time. This seems to me to show
that many people can comfortably use a technical word, such as
‘percentage’, almost every day, without ever forming a clear idea in
their heads of what it means. If the people asked to think in terms of
percentages had properly examined the meaning of the word, they would
have necessarily found themselves answering exactly the same question as
the subjects in the other group, and there should have been no
difference between the two groups’ abilities to reason correctly. Having
this familiar word, ‘percentage,’ which everyone recognizes instantly,
seems to stand in the way of a full comprehension of the question being
asked. Over-reliance on technical jargon actually does impede
understanding of technical concepts. This seems to be particularly true
when familiar abstract ideas are not deliberately translated into the
concrete realm.
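The subset logic behind the conjunction fallacy can be made concrete with a short sketch. The survey data below is entirely invented for illustration; the point is only that, whatever the numbers, the conjunction can never outnumber the single condition, and that the 'percentage' and 'out of a hundred' framings are the same question:

```python
import random

# An invented survey of 100 participants; the ages and heart-attack
# rate used here are arbitrary, purely for illustration.
random.seed(0)
survey = [
    {"age": random.randint(20, 90), "heart_attack": random.random() < 0.2}
    for _ in range(100)
]

# Frequency framing: simply count people "out of a hundred".
had_attack = [p for p in survey if p["heart_attack"]]
over_55_and_attack = [p for p in had_attack if p["age"] > 55]

# 'Over 55 and has had a heart attack' is a subset of 'has had a heart
# attack', so its count can never be the larger of the two,
# whatever the underlying data.
assert len(over_55_and_attack) <= len(had_attack)

# Percentage framing: with 100 participants, a percentage is just the
# count itself, so the same inequality must hold.
pct_attack = 100 * len(had_attack) / len(survey)
pct_both = 100 * len(over_55_and_attack) / len(survey)
assert pct_both <= pct_attack
```

Seen this way, the two groups in the study were answering one and the same counting question; only the familiar abstraction 'percentage' stood between the second framing and the first.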
When
I read a piece of technical literature, I have a deliberate policy with
regard to jargon that greatly enhances my comprehension. As with the
‘hyperspectral imaging’ example, redundancy upsets me, so I mentally
remove it, allowing myself to focus on the actual (uncrowded)
information content. In this case, I actually had to perform a quick
internet search to convince myself that the ‘hyper’ bit really was just
hype, before I could comfortably continue reading. Once all the
unnecessary words have been removed, I typically reread each difficult
or important sentence, with technical terms mentally replaced with
synonyms. This forces me to think beyond the mere recognition of
beguiling catchphrases, and coerces an explicit relation of the abstract
to the real. It's only after I can make sense of the text with the
jargon tinkered with in this way that I feel my understanding is at an
acceptable level. And if I can’t understand it after this exercise, then
I have the advantage of knowing it.
For writers, I wonder if there is some profit to be had, in terms of depth of appreciation, by occasionally using terms that are unfamiliar in the given context. The odd wacky metaphor might be just the thing to fire up the reader's sparkle circuits.
[1] Langer, E., Blank, A., and Chanowitz, B., "The Mindlessness of Ostensibly Thoughtful Action: The Role of 'Placebic' Information in Interpersonal Interaction," Journal of Personality and Social Psychology, 1978, Vol. 36, No. 6, pp. 635-642.
[2] Tversky, A., and Kahneman, D., "Extension versus intuitive reasoning: The conjunction fallacy in probability judgment," Psychological Review, 1983, Vol. 90, No. 4, pp. 293-315.