Sunday, February 2, 2014

Practical Morality, Part 2


It has been said that democracy is the worst form of government, except all those others that have been tried.
Winston Churchill 

(The second of two parts. Read the first installment here.)


Politics & Science

I have a funny little feeling that Churchill actually knew a thing or two about politics. According to dear, old Winston, democracy sucks. But why does it suck? And does it necessarily suck?

A full analysis of these questions could run into thousands of pages, and obviously stretches far beyond any area in which I could claim expertise, but for now, at least, I want to point out just one aspect of democracy's poor performance to date that can most definitely be fixed: the failure, so far, of both politicians and the electorate to explicitly recognize the necessarily rational basis for morality.

Time and again, we see scientific experts consulted in order to obtain the best quality data possible to support some process of policy decision, only for the elected politicians to ignore what they have been told, in favour of whatever decision suited their prior ideology. This is bad enough, but very often, the scientific analysis is never even sought. Somehow, this is seen by the voting public as acceptable. Worse still, it seems to be often treated as desirable. Certainly, it is something built into the contemporary political culture of many democracies.

This perverse situation is made possible, almost inevitable, in fact, by the widespread, mistaken belief that science has absolutely nothing to say about what is morally desirable. Under this insidious assumption, how could the expert scientist possibly have anything conclusive to say about morality? Morality is not the art of what is, but of what should be done, so evidently, we must enforce a clear division of labour, such that the data gathering is left to the expert scientist, while morality is left to the ethicist and the expert politician. Seriously, it's not as if politics can be reduced to questions of fact, is it?

There: the absurdity of the prevailing position, exposed.

This position is so ubiquitous, it seems to be held even by many of the most respected (and powerful) scientists around. For example, in an episode of BBC Radio 4's "The Life Scientific," broadcast on October 2nd, 2012, Mark Walport was interviewed by Jim Al-Khalili. Walport, who at the time was about to assume the position of chief scientific adviser to the British government, spoke about numerous things that made brilliant sense to me, but about 16 minutes in, he was asked what his attitude would be, should his advice be ignored by the politicians. This was his response:
It’s very important for an adviser to distinguish between what is the science, and then recognize that there may be a series of different decisions that you can take, and that’s then politics.
Quite clearly, he is making the point that at a certain moment, the science ends and politics takes over, that the politician may choose to ignore the best quality advice (in favour of what? gut feeling? divine inspiration?), and that this is just fine: the scientist, incapable of judging human affairs, must deliver his evidence then keep his mouth shut. A little later, Walport elaborated on what he apparently sees as the strict divide between science and politics:
That's absolutely right, [politics isn't always based on reason,] politics is based on all sorts of things, it’s based on political ideology, it’s based sometimes on pragmatism, it’s based on choosing which battles to fight and which battles not to fight, but that’s as it were the distinction between scientific advice and political decisions. 
From somebody with Walport's scientific credentials, I'd have expected to hear these comments followed by something to the effect that this is wrong, and this culture has to be changed. But it seems that this chief scientific adviser holds the view that this perceived distinction between rationally acquired understanding and the running of a country is right and proper.

It may well be that to some extent during this interview, Walport felt unable to express his true views on this matter, having already seen the outcome when another prominent scientific adviser dared cross the UK government. In 2009, David Nutt was sacked from his position as chairman of the Advisory Council on the Misuse of Drugs by Home Secretary Alan Johnson. Nutt's apparent crime was to point out that the legal classification of recreational drugs in the UK was inconsistent with the best scientific measures of the harm those drugs cause. Johnson's political career received no visible setback as a result of this action (and its strong inherent suggestion that he likes to play without the net).

The moment we realize that the question of what ought to be done is a question concerning matters of fact, and that matters of fact can only be answered more reliably when investigated more scientifically, then we begin to wonder how on Earth it can be acceptable for policy decisions affecting potentially millions of people to go against the best quality scientific advice available. What procedure could possibly justify such decisions? (At some point, a decision has been made, without using the decision procedure, that the decision procedure is broken!)

The politicians feel it is appropriate to ignore scientific advice, partly because the top scientists (like Walport) are telling them this is so. They feel they have understanding and expertise that the scientist cannot tap into, because this is the prevailing culture: human needs cannot be assessed by evidence and logic. Politicians are encouraged to invent their own dubious epistemologies, because society persistently fails to recognize the truth about moral realism, and the logical relationship between morality and science.

Within this culture, the politician is free, even expected, to employ his deliberately non-scientific judgement, often citing a mandate from the masses to justify manifestly unsound policies: 'who am I, a servant of the people, to defy popular opinion?' Well, sorry folks, but there are some things you just don't get to vote on. If I'm feeling unwell and go to the doctor, he will not say, "Your test results are in: you either have 6 months to live, or it's just a minor cold. You decide!"

Indeed, the elected politician is a servant of the people, and as such, trivially, has a duty to serve the interests of the population. This can only be done when a rational procedure enabling reliable predictions about the social outcomes of policy decisions is utilized. Another historic British statesman, Edmund Burke, said this in 1774, and it remains apt:
Your representative owes you, not his industry only, but his judgment; and he betrays, instead of serving you, if he sacrifices it to your opinion.


Honesty As A Meta-Virtue

As I mentioned in Part 1, the possibility of moral relativism scares the living crap out of people. The kind of moral relativism I describe (we've got to be careful, there are other kinds, which make little sense) follows as a trivial and necessary consequence of the moral realism I have outlined here and in earlier essays. What makes an act I perform moral is a combination of (a) the likely (real) future state of the world, with versus without the act, and (b) my utility function (the algorithm that assigns, for me, relative value to different states of existence), which is a real and objective property of the matter that composes my mind. We thus arrive at realism. We also arrive at the obvious conclusion that another decision-making entity with a different utility function, even if placed under circumstances identical to mine, may find that radically different actions count for it as moral.
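This relativism can be made concrete with a small sketch. It is purely my own toy illustration (the agents, world states, and numbers are invented for the example, not taken from the essay): two agents face identical circumstances, but because their utility functions differ, different acts maximize expected value for each, and so different acts count for each as moral.

```python
# Toy model: morality as expected-utility maximization over future world states.

def expected_utility(utility, outcome_probs):
    """Expected value of an act, given probabilities over resulting world states."""
    return sum(p * utility[state] for state, p in outcome_probs.items())

# Hypothetical probabilities of world states, with versus without the act.
with_act    = {"quiet": 0.2, "lively": 0.8}
without_act = {"quiet": 0.9, "lively": 0.1}

# Two utility functions: objective properties of two different minds.
alice = {"quiet": 1.0, "lively": 5.0}   # Alice values lively states of the world
bob   = {"quiet": 5.0, "lively": 1.0}   # Bob values quiet states of the world

def moral_choice(utility):
    """The act that counts as moral for an agent is the one its utility function favours."""
    if expected_utility(utility, with_act) > expected_utility(utility, without_act):
        return "act"
    return "refrain"

print(moral_choice(alice))  # "act"     -- acting is moral for Alice
print(moral_choice(bob))    # "refrain" -- the same act is immoral for Bob
```

Identical circumstances, different utility functions, opposite moral verdicts: realism and relativism at once.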

In short, what makes it moral for me to pursue goal X is the trivial fact that I desire X (supported by a sound rational procedure - this is crucial!). I admit, this does have some chilling-sounding consequences, particularly when the parenthetical qualification is omitted.

Whether it is out of fear for what we ourselves might do should our assessment of value happen to change or out of alarm at the prospect that others might not share the same values as us (I suspect the latter as dominant), a major industry has grown up over the centuries to firmly establish a certain dogma. According to this dogma, the determination of morality is universal and absolute. There is no sense in which X could be moral for me, but immoral for another. In particular, the determination of morality consists of no self-serving component - what you desire counts for nothing, all that matters is the rules (Kant's categorical imperative).

The objective of this dogma is clear: a moral code can be established such that the capacity for a person to think 'outside the box' is completely eliminated. In our fearful state, we might selfishly breathe a sigh of relief, but essentially, it is a technology for destroying a person's autonomy.

In a famous paper of 1972 [1], Philippa Foot exposed the ridiculous absurdity of this dogma. According to the dogma (quoting from Foot):
Actions that are truly moral must be done "for their own sake," "because they are right," and not some ulterior purpose.

But what then motivates a person to be moral?

    "Stupid question," snorts the dogmatist, "it is the obvious fact that to be moral is good."

But what if I have no interest in what is good?

     "But you ought to be interested in what is good, if you were not, then you would not be a moral person!"

Um... so the only possible reason to be moral is that it is immoral not to be.

Great. All that remains is to arbitrarily decide what is moral, so we can all toe the line. And it must be an arbitrary decision, mind you, for if it were not, then whatever non-arbitrary procedure might be used would constitute a motivating principle, violating the central moral dogma. In fact, the decision to decide what is moral must also be arbitrary - we could equally arbitrarily decide not to decide this. Now fuck off, and don't ask any more questions.


Ladies and gentlemen, this travesty, this grotesque parody of what the human mind is capable of, remains to this day the orthodox and default view of morality. For centuries, respectable intellectuals have been proudly going round in circles, confidently marching right up their own backsides, with arguments of exactly this type.

Utter nonsense as it is, might we not yet draw comfort from the status quo that this dogma provides? For as long as it is universally accepted, does it not serve well to minimize the incidence of crime and immorality? Let's not be too hasty.

But is it universally accepted? Crime is a major problem for society, and crime ain't committing itself! Somebody is breaking the rules, and if we looked inside the mind of a criminal, it seems self-evident that the message we'd receive would be something like: "Frankly, my dear, I couldn't give a toss about being moral or about being good. I don't give a damn about the rules."

The cases where the moral dogma has failed are perhaps exactly the cases where it (or something) was most needed. And I think it's a good bet that in many cases this failure is because what has been unsuccessfully drummed into the potential criminal's head has been such profoundly manifest gibberish, worthless circular garbage, not suitable to convince any self-respecting person with the tiniest inclination towards independent thought. An opportunity to correct an antisocial tendency has been lost, in a way that in hindsight seems almost inevitable.

So here's my crazy proposal: instead of making up any old absurd crap, and teaching that to our kids as the basis for appropriate behaviour, why don't we just tell them the truth? I think this policy has some serious potential advantages.

So maybe you don't care about rules for their own sake. This is good and proper - society needs more free thinkers. Disconnected from any consideration of consequences, rules are nothing more than sounds leaking out of people's mouths, and trails of ink on pieces of paper. But do you care about yourself? Ultimately, this is all you need to care about, in order to be good.

If you actually do care about your own wellbeing (which, a priori, you must), then if you are being consistent, you must also care about adopting a sound strategy for achieving your goals, and it turns out that for reasons touched upon in Part 1, such strategies overwhelmingly involve cooperating with other people - fulfilling one's obligations laid out in the social contract. The profound effect of the social contract is that for me, as a fundamentally selfish entity, I do not merely need to act as if I care about other people. Rather, I actually do care, for naturally selected reasons, both biological and cultural. This, we can expect to hold true for the vast majority of humans, under the vast majority of conceivable circumstances - we have solid mathematics (game theory) capable of explaining how this comes about.
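The game-theoretic point can be illustrated with the standard iterated prisoner's dilemma (a textbook example, not a model from the essay itself): when interactions repeat, a reciprocal cooperator like tit-for-tat sustains the mutual-cooperation payoff, while an unconditional defector forfeits it after one exploitative round, so cooperation falls out of pure self-interest.

```python
# Iterated prisoner's dilemma with the standard payoff matrix.
# C = cooperate, D = defect; (my_move, their_move) -> my payoff.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    """Total payoffs for both strategies over repeated rounds."""
    seen_by_a, seen_by_b = [], []   # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(seen_by_a)
        move_b = strategy_b(seen_by_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        seen_by_a.append(move_b)
        seen_by_b.append(move_a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))     # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))   # (99, 104): defection wins once, then stagnates
```

Over 100 rounds, mutual reciprocity earns 300 each, while the defector scrapes 104 by exploiting a single round and then grinding out the punishment payoff - which is why, among agents who meet repeatedly, the selfish strategy is to cooperate.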

It seems to me that what I've just argued cannot be refuted. However, whether people would tend to behave better or worse if such a message were to be adopted as the principal method of teaching morality is ultimately an empirical question. I do not know for certain. I don't claim to know exactly why people misbehave. But that this message ought to be better than the traditional dogma, I consider to be supported by very powerful arguments.

The principal advantages I see for my proposal, as opposed to continued appeal to the absolutist dogma, are (i) honesty, (ii) appeal to self interest, and (iii) coherence. If the main argument used in an attempt to stop me doing something I believe I want to do is a lie, then it seems there is a good chance that I will recognize it as a lie, and ignore it. This seems like a strong general tendency.

Similarly, if what I'm told is the basis for morality sounds suspiciously like the incoherent babbling of an imbecile, then I think the risk that I will not follow the prescribed moral code is enhanced. The absolutist dogma manifestly makes no sense, and must be expected to lose significant credibility as a result. For all I know, there may be a large population of law-abiding psychopaths who avoid antisocial behaviour principally because of their indoctrination since birth in the old moral dogma (this seems to be a major fear people have when I discuss openly recognizing the truth of morality based on self interest), but is there any good reason to think that indoctrination in utter nonsense should be more effective than indoctrination in a moral methodology that makes natural good sense? Realistic moral relativism has the enormous advantage of exactly this kind of coherence: we do not need to anesthetize our brains to believe it.

This brings us to a further advantage of honest moral teaching: its success does not depend on the cultivated suppression of free thought and critical evaluation (behaviours above which few can be ranked as higher virtues). When the people we trust most repeatedly tell us nonsense, and try to pass it off as truth, it is hard not to believe it. But the price of believing manifest gibberish is an internal crisis, known to psychologists as cognitive dissonance. It seems quite reasonable to suppose that a mind committed to holding incoherent propositions as beliefs must become adept at suppressing its ability to recognize that incoherence. I think we can easily anticipate the dangers of this talent.

I opened this essay with a slightly depressing quotation from Winston Churchill about democracy. The ultimate goal, however, of considering moral realism in these two posts has been a fuller democratization of the social contract. With this goal in mind, then, let me end by offsetting that quote with a more positive piece of advice from the same extraordinary man:

Never, never, never quit.  





References



 [1] Philippa Foot, "Morality as a system of hypothetical imperatives," Philosophical Review, Vol. 81, No. 3, pp. 305-316, 1972.





5 comments:

  1. This is a difficult question. I appreciate your sharing your thoughts on this and it is nice to see someone with the same confluence of interests: probability theory (epistemology), economics and political philosophy.

    I totally agree about the danger of dogma (rules not supported by reason) and the importance of universality.
    But even though a number of signs and fingers point towards respect for self-ownership and property rights as a potential or even likely solution, none of the arguments seem bullet-proof.

    You can approximately derive the rules we tend to use in our daily lives from self-interest, but there remain some holes.
    If you want to achieve your goals and you recognize the benefits of cooperation (division of labor and specialization), then you are best served to recognize the norms dealing with scarce and rivalrous goods. If you violate them, you are putting yourself at risk of being excluded and exposed to others doing the same to you (universality, estoppel). So it is in your best interest to adopt those norms/morality.
    But this rationale derived from self-interest fails to explain why you should not steal if you are fairly certain you won't be seen.

    PS: the topic of democracy needs further discussion, but I'll keep that for another day.

    1. Hi Julien, thanks for your comment.

      But this rationale derived from self-interest fails to explain why you should not steal if you are fairly certain you won't be seen.

We need to be careful. Whether morality supports a heuristic to the effect that stealing is wrong even when you are guaranteed not to be caught is an empirical question. Just because you don't have a mechanism in your head doesn't mean that one doesn't exist.

      The very fact that you seem to intuitively accept that the non-existence of such a mechanism would constitute a failure of the moral system suggests very strongly that such a mechanism does actually exist. If I told you that it was OK to do X, and you felt bad about that, it would imply that for you, doing X is bad. And since the moral system depends entirely on your utility function, then naturally, doing X is bad.

      I can suggest a rough outline of how such a principle, "it's wrong to steal even if nobody will know it was you," might come about. Supposing that it was accepted that it is OK to steal under such circumstances, then theft might become more common. If I gained an opportunity to steal, I would be more likely to do so. Every time I did so, however, it would contribute to a growing cultural acceptance of the normality of theft. We can predict several ways this could lead to a weakening of the social contract. Society would suffer, and I would likely suffer, as a result.
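That feedback loop can be sketched as a toy simulation (entirely my own construction, with made-up parameters, offered only as an illustration of the outline above): each theft nudges up cultural acceptance of theft, which raises the theft rate, which in turn erodes the cooperative surplus that everyone, including the would-be thief, draws on.

```python
# Toy feedback model: thefts normalize theft, which erodes the social contract.

def simulate(steps=50, acceptance=0.05, surplus=100.0):
    """Track cultural acceptance of theft and the shared surplus it erodes."""
    for _ in range(steps):
        theft_rate = acceptance                                 # more acceptance -> more theft
        acceptance = min(1.0, acceptance + 0.02 * theft_rate)   # each theft normalizes theft
        surplus *= (1.0 - 0.5 * theft_rate)                     # cooperation's payoff shrinks
    return acceptance, surplus

final_acceptance, final_surplus = simulate()
# Acceptance has grown, and the surplus everyone shares has shrunk from its start.
print(final_acceptance > 0.05, final_surplus < 100.0)
```

The numbers mean nothing in themselves; the point is the direction of the dynamic: a norm permitting cryptic theft is self-amplifying, and its cost lands on everyone, thief included.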

    2. PS

      On the flip side, if the structure of the universe really did fail to provide a mechanism making cryptic theft typically immoral (hypothetically speaking), then by definition, this wouldn't matter. Any feeling to the contrary would be merely another illustration of how raw intuition can be a poor guide to how reality actually works.

      In fact, it's quite easy to imagine special cases where theft, or even violence, is moral. Forceful imprisonment of the criminally insane being an obvious example.

    3. Sorry for the delayed response.
I think you are right: self-interest plus universality can arrive at "stealing is wrong," while eliminating "getting caught is wrong" as a principle, even though the premise of self-interest alone is insufficient.

      Regarding your last point, I'm not so sure that it's easy to identify cases where violence (in aggression, not self-defense) would be moral. A criminally insane person that didn't harm anyone (yet) can be ostracized (not allowed on anyone's property), but I'm not sure violence (imprisonment) would be warranted.

    4. Happy to continue the discussion.

      A classic thought experiment in moral philosophy is the trolley problem. Several versions exist, but usually, the point is that by pushing a button you can cause 1 innocent person to die, rather than say 5 (also innocent) people, if you don't. To make it easier, we can say that the default case is 5 million people dying, as opposed to 1 person dying if you push the button.

      I can't think of any principle that would make pushing that button not an act of violence against that poor individual, but, ceteris paribus, I would describe that act as moral.

      Any act of enforced punishment, be it for purposes of incapacitation, deterrence (of the guilty individual or of others), or rehabilitation, has similar properties - normally experienced freedoms are removed for the purpose of protecting others in society. Under many circumstances, such actions are appropriate.

      We could choose to define these actions as not violent (e.g. since they represent a form of self defense), but such a definition would not change the physics of what is happening. What matters, ultimately, is not what we call it, but whether such behavior is appropriate.

      By the way, the guy who desperately wants your $200, and is willing to shoot you for it, is also defending his perceived self interest. There are no black-and-white absolute principles of morality, other than acting rationally on the best available information, and thereby treating each case on its merits.
