
Saturday, September 2, 2017

Disruptive Writing Style

In the pursuit of science, under whose umbrella I consider all intellectually rigorous activity to fall, the formulation and communication of ideas are critical. Here, I'll outline aspects of my own attitude to scientific communication.

In an earlier post on jargon, I advocated against over-reliance on familiar terminology, as this can often give a false impression of understanding. I recommended occasionally throwing out unusual pieces of vocabulary, in the hope of ensuring that one's audience is engaged with the concepts, and not just semi-consciously following the signposts. This technique is a significant part of a communication strategy we might call 'disruptive writing style.'

When I say disruptive, I'm not talking about the content of an essay, article, speech, or whatever. I don't mean that the matter under discussion is disruptive, the way the birth of digital electronics represented a disruptive technology. Instead, I'm talking about the vehicle by which one conveys one's ideas to a wider audience. I'm talking about a style that occasionally strives to interrupt the smooth progress of the reader (or listener) from the beginning to the end of a piece, in order to ensure that the thesis is being taken in.

Saturday, July 20, 2013

Greatness, By Definition



The goals for this blog have always been two-fold: (1) to bring students and professionals in the sciences into close acquaintance with centrally important topics in Bayesian statistics and rational scientific method, and (2) to bring the universal scope and beauty of science to the awareness of as many as possible, both within and outside the field - if you have any kind of problem in the real world, then science is the tool for you.

Effective communication to scientists, though, runs the risk of being impenetrable for non-scientists, while my efforts to simplify leave me worried that the mathematically adept reader will quickly get bored.

To help make the material on the blog more accessible, therefore, and as part of a very, very slow but steady plan to achieve world domination, I am very happy to announce two new resources I've put together:

  • Glossary - definitions of technical terms used on the blog
  • Mathematical Resource - a set of links explaining the basics of probability from the beginning. Blog articles and glossary entries are linked in a logical order, starting from zero assumed prior expertise.

Both of these now appear in the links list on the right-hand sidebar.

The new resources are partially complete. Some entries named in the glossary, for example, have not yet been written. Regular updates are planned.

The mathematical resource is a near-instantaneous extension of the material compiled for the glossary, and is actually my glacially slow response to a highly useful suggestion made by Richard Carrier, almost one year ago. The material has been organized in what seems to me to be a logical order, and for those interested, may be viewed as a short course in statistics, delivering, I hope, real practical skills in the topic. Its main purpose, though, is to provide an entry point for those interested in the blog, but unfamiliar with some of the important technical concepts.

The glossary may also be useful to those already familiar with the topics. Terms are used on the blog, for example, with meanings different to those of many other authors. Hypothesis testing is one such case, limited by some to denoting frequentist tests of significance, but used here to refer more generally to any ranking of the reliability of propositions.
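
To illustrate that broader sense, here's a minimal sketch in Python (the hypotheses, priors, and data below are invented purely for illustration), in which 'testing' a pair of propositions just means ranking them by their posterior probabilities:

    # A toy 'hypothesis test' in the broad sense used on this blog: rank two
    # propositions by how probable each is in the light of the data.
    # Hypotheses, priors, and data are hypothetical, chosen only for illustration.
    hypotheses = {
        "H1: coin is fair (p = 0.5)":   {"prior": 0.5, "p_heads": 0.5},
        "H2: coin is biased (p = 0.8)": {"prior": 0.5, "p_heads": 0.8},
    }

    heads, tails = 8, 2  # suppose we observed 8 heads in 10 tosses

    # Unnormalized posterior for each hypothesis: prior times likelihood of the data.
    posterior = {
        name: h["prior"] * h["p_heads"] ** heads * (1 - h["p_heads"]) ** tails
        for name, h in hypotheses.items()
    }

    # Normalize and print the ranking, most reliable proposition first.
    total = sum(posterior.values())
    for name, p in sorted(posterior.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name}: posterior probability {p / total:.2f}")

Whatever the details of the model, the output is simply an ordering of the propositions by their reliability, which is all the term is taken to require here.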

The new glossary, then, is an attempt to rationalize the terminology, bringing the vocabulary back in line with what it was always intended to mean, rather than with some flawed surrogates for those original intentions. For the same reason, in some cases alternative terms are used, such as 'falsifiability principle', in preference to the more common 'falsification principle'.

Important distinctions are also highlighted. Morality, for example, is differentiated from moral fact. Philosophers are found to be distinct from 'nominal philosophers'. Equally importantly, science is explicitly distanced from 'the activity of scientists'. As a result, morality, philosophy, and science are found to be different words for exactly the same phenomenon.

In a previous article, I warned against excessive reliance on jargon, so it's perhaps worth explaining how the current initiative is not hypocrisy. That article was concerned with overuse of unnecessary jargon, which often serves as an impediment to understanding. Symptoms of this include (1) confusion when jargon terms are replaced with direct (but less familiar) synonyms, and (2) nobody noticing when vocabulary is replaced with familiar terms whose meanings don't apply. By providing a precise lexicon, we can help to prevent exactly these problems, and others, thus whetting the edge of our analytical blade, and accelerating our philosophical progress.

On a closely related topic, there is a fashionable notion along the lines that to argue from the definition of words is fallacious, as if definitions of words are useless. This is not correct: argument from definition is a valid form of reasoning, but one that is very commonly misused.

Eliezer Yudkowsky's highly recommendable sequence, A Human's Guide to Words, covers the fallacious application very well. His principal example is the ancient riddle: if a tree falls in a forest, where nobody is present to hear it, does it make a sound? Yudkowsky imagines two people arguing over this riddle. One asserts, "yes, by definition: acoustic vibrations travel through the air"; the other responds, "no, by definition: no auditory sensation occurs in anybody's brain." These two are clearly applying different definitions to the same word. In order to reach consensus, they must agree on a single definition.

These two haven't committed any fallacy yet; each is reasoning correctly from their own definition. But it is a pointless argument - as pointless as me arguing in English, when I only understand English, with a person who only speaks and understands Japanese. The fallacy begins, however, with something like the following example.

Suppose you and I both understand and agree on what the word 'rainbow' refers to. One day, though, I'm writing a dictionary, and under 'rainbow,' I include the innocent-looking phrase: "occurs shortly after rain." (Well duh, rain is even in the name.) So we go visit a big waterfall and see colours in the spray, and I say "look, it must have recently rained here." Con artists term this tactic 'bait and switch.' I cannot legitimately reason in this way, because I have arbitrarily attached not a symbol to a meaning, but attributes to a real physical object.

To show trivially that there is a valid form of argument from definition, though, consider the following truism: "black things are black." This is necessarily true, because blackness is exactly the property I'm talking about when I invoke the phrase "black things." It is not that I am hoping to alter the contents of reality by asserting the necessary truth of the statement, but that I am referring to a particular class of entities, and the entities I have in mind just happen to all be black - by definition. 

One might complain, "but I prefer to use the phrase 'black things' not just for things that are black, but also for things that are nearly black." This would certainly be perverse, but it's not illegal in any sense I can think of. Fine, if you want to use the term that way, you may do so. I'll continue to implement my definition, which I find to be the most reasonable, and every time you hear me say the words "black things," you must replace them with any symbol you like that conveys the required meaning to you. Your symbol might be a sequence of 47 charcoal 'z's marked on papyrus, or a mental image of a yak; I don't care.

Yes, our definitions are arbitrary, but arbitrary in the sense that there is no prior privileged status of the symbols we end up using, and not in the sense that the meanings we attach to those symbols are unimportant.

Here's an example from my own experience. Several times, I have tried unsuccessfully to explain to people my discovery that ethics is a scientific discipline. (By the way, I'm not claiming priority for this discovery.) The objections typically go through three phases. First is the feeling of hand-waviness, which is understandable, given how ridiculously simple the argument is:

them- No way, it's too simple. You can't possibly claim such an unexpected result with such a thin argument.
me- OK, show me which details I've glossed over.
them- [pause...] All right, the argument looks logically sound, but I don't believe it - look at your axioms: why should I accept those? Those aren't the axioms I choose.
me- Those aren't axioms at all. I don't need to assume their truth. These are basic statements that are true by definition. If you don't like the words I've attached to those definitions, then pick your own words; I'm happy to accommodate them.
them- YOU CAN'T DO THAT! You're trying to alter reality by your choice of definitions....


And that's the final stumbling block people seem to have the biggest trouble getting over.

Definitions are important. If you think that making definitions is a bogus attempt to alter reality, then be true to your beliefs: see how much intellectual progress you can make without assigning meanings to words. The new <fanfare!> Maximum Entropy Glossary </fanfare!> is an attempt to streamline intellectual progress. If you engage with anything I have written on this blog, then you engage with meanings I have attached to strings of typed characters. In some important cases, I have tried to make those meanings clear and precise. If you find yourself disagreeing with strings of characters, then you are making a mistake. If you disagree with the way I manipulate meanings then we can discuss it like adults, confident that we are talking about the same things.



Friday, December 21, 2012

Jargon



I often marvel at the achievements of the early scientific pioneers, the Galileos, the Newtons, and the like. Their degree of understanding would have been extraordinary under any circumstances, but as if it wasn't hard enough, they had almost no technical vocabulary to build their ideas from. They had to develop the vocabulary themselves. How did they even know what to think, without a theoretical framework already in place? Amazing. But at other times, I wonder if their situation was not one of incredible intellectual liberty, almost entirely unchained by technical jargon and untrammelled by rigorous notation. Perhaps it was a slight advantage for them, not to have those vast regions of concept space effectively cut off from possible exploration by the focusing effects of a mature scientific language. Standardized scientific language may or may not limit the ease with which novel ideas are explored, but I think there are strong grounds for believing that jargon can actively inhibit comprehension of communicated ideas, as I now want to explore.

It's certainly true that beyond a certain elementary point, scientific progress, or any kind of intellectual advance, is severely hindered without a robust technical vocabulary, but we should not conflate the proliferation of jargon with the advance of understanding. Standardized terminology is vital for ‘high-level’ thought and debate, but all too often, we seem to see this terminology as an indicator of technical progress or sophisticated thought, when it is the content of ideas we should be examining for such indications.


There is a common con trick, one that we are almost expected to use in order to advance ourselves, which consists of enhancing credibility by expanding the number of words one uses and the complexity of the phrases they are fitted into. It seems as though one is trying to create the illusion of intellectual rigour and content, and perhaps it's not a bad guess to suggest that jargon proliferates most wildly where intellectual rigour is least supported by the content of the ideas being expressed. Richard Dawkins relates somewhere (possibly in ‘Unweaving the Rainbow’) a story of a post-modernist philosopher who gave a talk, and who, in reply to a questioner who said that he wasn't able to understand some point, said ‘oh, thank you very much.’ It suggests that the content of the idea was not important; otherwise the speaker would certainly have been unhappy that it was not understandable. Instead, it was the level of difficulty of the language that gave the talk its merit.

It has been shown experimentally that adding vacuous additional words can have a powerful psychological effect. Ellen Langer's famous study [1], for example, consisted of approaching people in the middle of a photocopying job, and asking to butt in. If the experimenter (blinded to the purpose of the experiment) said “Excuse me, I have 5 pages. May I use the xerox machine?” a modest majority of people let her (60%), but if she said “Excuse me, I have 5 pages. May I use the xerox machine, because I have to make copies?” the number of people persuaded to step aside was much greater (93%). This shows clearly how words that add zero information can greatly enhance credibility - an effect that is exploited much too often, and not just by charmers, business people, sports commentators, and post-modernists, but by scientists as well. The other day I was reading an academic article on hyperspectral imaging, a phrase that made me uneasy - I wondered what it was - until I realised that ‘hyperspectral imaging’ is exactly the same thing as, yup, ‘spectral imaging.’

Even if we have excised the redundancy from jargon-rich language, I often suspect that technical jargon can actually impede understanding. Just as unnecessary multiplicity of terms can enhance credibility at the photocopier, I suspect that recognition of familiar jargon gives one an easy feeling which is too often confused with comprehension. You can test this with skilled scientists, by tinkering just a little bit with their beloved terminology, and observing their often blank or slightly panicked expressions. Once, when preparing a manuscript on the lifetimes of charged particles in semiconductors (the lifetime is similar to the half-life in radioactivity), in one place I replaced ‘lifetime’ with the phrase ‘survival time.’ When I showed the text to a close colleague (and far better experimentalist than me) for comments, he was very uncomfortable with this tiny change. He seemed unable to relate this new phrase to his established technical lexicon.

You might think that this uneasiness is due to the need for each scientific term to be rigorously defined and used precisely, but it's not. Scientists mix up their jargon all the time quite freely, and without anybody batting an eyelid most of the time. I have read, for example, an extremely technical textbook in which an expert author copiously uses the term ‘cross-section’ (something related to a particle's interactability, and necessarily with units of area) in place of frequency, reaction probability, lifetime, mean free path, and a whole host of concepts, all somewhat related to the tendency of a pair of particles to bump into each other. Nobody minds (except for grumpy arses like me), simply because the word is familiar in the context.

Tversky and Kahneman have provided what I interpret as strong experimental evidence [2] for my theory that jargon substitutes familiarity for comprehension. Two groups of study participants were asked to estimate a couple of probable outcomes from some imaginary health survey. One group was asked two questions in the form ‘what percentage of survey participants do you think had had heart attacks?’ and ‘what percentage of the survey participants were over 55 and had had heart attacks?’ By simple logic, the latter percentage cannot be larger than the former, as ‘over 55 and has had a heart attack’ is a subset of ‘has had a heart attack,’ but 65% of subjects estimated the latter percentage as the larger. This is called the conjunction fallacy. Apparently, the greater detail, all parts of which sit comfortably together, creates a false sense of psychological coherence that messes with our ability to gauge probabilities properly.

The other group was asked the same questions but worded differently: ‘out of a hundred survey participants, how many do you think had had heart attacks?’ and ‘how many do you think were over 55 and had had heart attacks?’ Subjects in the second group turned out to be much less likely to commit the conjunction fallacy - only 25% this time. This seems to me to show that many people can comfortably use a technical word, such as ‘percentage’, almost every day, without ever forming a clear idea in their heads of what it means. If the people asked to think in terms of percentages had properly examined the meaning of the word, they would have necessarily found themselves answering exactly the same question as the subjects in the other group, and there should have been no difference between the two groups' abilities to reason correctly. Having this familiar word, ‘percentage,’ which everyone recognizes instantly, seems to stand in the way of a full comprehension of the question being asked. Over-reliance on technical jargon actually does impede understanding of technical concepts. This seems to be particularly true when familiar abstract ideas are not deliberately translated into the concrete realm.
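
The subset relation is easy to make concrete. Here is a minimal sketch in Python, with survey numbers invented purely for illustration: however the hypothetical survey turns out, the count for ‘over 55 and had a heart attack’ is a filtered version of the count for ‘had a heart attack,’ and so can never be the larger of the two - which is exactly what the frequency wording makes vivid.

    import random

    random.seed(0)

    # A hypothetical survey of 100 participants: each gets a random age and a
    # random heart-attack flag. The numbers are made up purely for illustration.
    participants = [
        {"age": random.randint(20, 90), "heart_attack": random.random() < 0.15}
        for _ in range(100)
    ]

    heart_attack = sum(1 for p in participants if p["heart_attack"])
    over_55_and_heart_attack = sum(
        1 for p in participants if p["age"] > 55 and p["heart_attack"]
    )

    print(f"Had heart attacks:             {heart_attack} out of 100")
    print(f"Over 55 and had heart attacks: {over_55_and_heart_attack} out of 100")

    # The second count filters the first, so it can never be the bigger number.
    assert over_55_and_heart_attack <= heart_attack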

When I read a piece of technical literature, I have a deliberate policy with regard to jargon that greatly enhances my comprehension. As with the ‘hyperspectral imaging’ example, redundancy upsets me, so I mentally remove it, allowing myself to focus on the actual (uncrowded) information content. In that case, I actually had to perform a quick internet search to convince myself that the ‘hyper’ bit really was just hype, before I could comfortably continue reading. Once all the unnecessary words have been removed, I typically reread each difficult or important sentence, with technical terms mentally replaced with synonyms. This forces me to think beyond the mere recognition of beguiling catchphrases, and coerces an explicit relation of the abstract to the real. It's only after I can make sense of the text with the jargon tinkered with in this way that I feel my understanding is at an acceptable level. And if I can't understand it after this exercise, then at least I have the advantage of knowing that.

For writers, I wonder if there is some profit to be had, in terms of depth of appreciation, by occasionally using terms that are unfamiliar in the given context. The odd wacky metaphor might be just the thing to fire up the reader's sparkle circuits.






[1] The Mindlessness of Ostensibly Thoughtful Action: The Role of "Placebic" Information in Interpersonal Interaction, Langer E., Blank A., and Chanowitz B., Journal of Personality and Social Psychology, 1978, Vol. 36, No. 6, Pages 635–642 (Sorry, the link is paywalled.)

[2] Extension versus intuitive reasoning: The conjunction fallacy in probability judgment, Tversky, A., and Kahneman, D., Psychological Review, 1983, Vol. 90, No. 4, Pages 293–315