It’s often easy to distinguish prejudice from informed opinion. Prejudice is an opinion ‘without visible means of support’, whilst the holder of an informed opinion can back it with arguments and examples. Such a holder may also be part of a professional community with its own journals, models, prizes, etc.
But is every subject with these features a form of knowledge? It turns out that quite a lot of them are not.
One relatively easy way to determine whether a subject is really knowledge is to ask whether the people practising it can make valid predictions. By valid I mean that their predictions are correct significantly more than half the time and much more often than the predictions made by non-experts.
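To make that criterion concrete, here is a minimal Python sketch of the kind of test it implies. The hit counts and the 5% significance level are assumptions for illustration, not data from any study discussed here; the sketch simply checks whether a set of yes/no predictions beats both a coin flip and the non-expert hit rate.

    # Sketch: is a group of "experts" predictive in the sense above?
    # Hypothetical hit counts; only the form of the test is the point.
    from scipy.stats import binomtest

    def better_than(hits: int, trials: int, baseline: float = 0.5) -> bool:
        """True if the hit rate beats `baseline` at the 5% level (one-sided binomial test)."""
        return binomtest(hits, trials, baseline, alternative="greater").pvalue < 0.05

    expert_hits, expert_trials = 61, 100   # hypothetical expert record
    lay_hits, lay_trials = 52, 100         # hypothetical non-expert record

    print("Experts beat coin-flipping:", better_than(expert_hits, expert_trials))
    print("Experts beat the lay hit rate:",
          better_than(expert_hits, expert_trials, baseline=lay_hits / lay_trials))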
[Some subjects, eg theology and media studies, contain many theories that make no predictions. I have discussed three of these separately.]
Pseudo-subjects are rare, possibly unknown, in science and engineering because these subjects have institutionalised methods for testing their theories. They are much commoner in medicine, largely because of the power of the placebo effect. For instance, homeopathy makes predictions about the effects of its treatments. When carefully tested to exclude placebo effects, eg in double-blind clinical trials, these predictions are generally wrong.
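As an illustration of the kind of test meant here, the following sketch compares recovery counts in a treatment arm against a placebo arm. The counts are invented, and the pooled two-proportion z-test is just one simple way of making the comparison.

    # Sketch of the placebo-controlled comparison: does the treatment arm
    # recover significantly more often than the placebo arm? Counts are invented.
    import math
    from statistics import NormalDist

    def one_sided_p(recovered_a, n_a, recovered_b, n_b):
        """P-value that arm A's recovery rate exceeds arm B's (pooled two-proportion z-test)."""
        p_a, p_b = recovered_a / n_a, recovered_b / n_b
        pooled = (recovered_a + recovered_b) / (n_a + n_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        return 1 - NormalDist().cdf((p_a - p_b) / se)

    p = one_sided_p(30, 100, 28, 100)   # hypothetical treatment vs placebo recoveries
    print(f"p = {p:.2f}" + ("  (no evidence of an effect beyond placebo)" if p >= 0.05 else ""))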
They are commoner still in the subjects that study human activity, eg Politics, Economics and the humanities.
In The Black Swan: The Impact of the Highly Improbable (2007), Nassim Nicholas Taleb discusses several fields of this kind. Three examples follow.
Security analysis
Securities are tradeable financial instruments such as shares, options and collateralised loan obligations. The job of the securities analyst is to advise investors which to buy and which to sell.
T Tyszka and P Zielonka (Expert Judgments: Financial Analysts Versus Weather Forecasters, J. of Psychology and Financial Markets, 2002, vol 3(3), p 152-160) found that, compared to weather forecasters, security analysts are worse at predicting but have more faith in their predictions. An analysis communicated by Philippe Bouchard showed that predictions in this area are on average no better than assuming that the next period will be like the last one, ie worthless.
This is despite the analysts’ extensive knowledge of the firms and sectors they study!
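A minimal sketch of the comparison behind that claim: score the analysts’ forecasts and the naive ‘next period will be like the last’ rule by mean absolute error. The series of outcomes and the analyst forecasts below are invented; only the style of comparison matters.

    # Sketch: scoring forecasts against the naive "repeat last period" baseline.
    def mean_abs_error(forecasts, actuals):
        return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

    actual_returns   = [0.02, -0.01, 0.03, -0.02, 0.01]   # hypothetical outcomes
    analyst_forecast = [0.04,  0.03, 0.05,  0.02, 0.04]   # hypothetical analyst calls
    naive_forecast   = [0.01] + actual_returns[:-1]       # last period's value, carried forward

    print("analyst MAE:", mean_abs_error(analyst_forecast, actual_returns))
    print("naive MAE:  ", mean_abs_error(naive_forecast, actual_returns))
    # If the analyst MAE is not clearly lower, the forecasts add nothing over the naive rule.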
Political science
Philip Tetlock (Expert Political Judgment: How Good Is It? How Can We Know?, Princeton University Press, 2005) asked about 300 supposed experts to judge the likelihood of certain events occurring within the next five years. He collected around 27,000 predictions. He found:
· Experts greatly overestimated their accuracy (the calibration sketch after this list shows how this is measured).
· Professors were no better than graduates.
· People with strong reputations were worse predictors than others.
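As noted above, overconfidence is measured by calibration: group predictions by the probability the expert stated, then compare those probabilities with how often the events actually happened. The (stated probability, outcome) pairs below are invented and merely stand in for the kind of data Tetlock collected.

    # Sketch: a calibration check of the kind used to show overconfidence.
    from collections import defaultdict

    predictions = [
        (0.9, True), (0.9, False), (0.9, False), (0.9, True),   # events called "90% likely"
        (0.7, True), (0.7, False), (0.7, False),                # events called "70% likely"
        (0.3, False), (0.3, True), (0.3, False),                # events called "30% likely"
    ]

    by_stated = defaultdict(list)
    for stated, happened in predictions:
        by_stated[stated].append(happened)

    for stated, outcomes in sorted(by_stated.items()):
        observed = sum(outcomes) / len(outcomes)
        print(f"stated {stated:.0%}  observed {observed:.0%}  ({len(outcomes)} forecasts)")
    # A calibrated forecaster's observed frequencies track the stated probabilities;
    # overconfidence shows up as observed rates well below the confident stated ones.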
Economics
Taleb found no systematic study of the accuracy of economists’ predictions. Such evidence as exists suggests that they are just slightly better than random. For instance, Makridakis and Hibon (The M3-Competition: results, conclusions and implications, Int. J. Forecasting, 2000, vol 16, p 451-476) ran competitions comparing more and less sophisticated forecasting methods. They found that sophisticated methods, like econometrics, were no better than very simple ones.
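To show the style of comparison, here is a small sketch that scores a very simple rule (repeat the last observation) against a slightly more elaborate one (single exponential smoothing, standing in here for a ‘sophisticated’ method) using sMAPE, the main accuracy measure of the M3 competition. The series and the smoothing constant are invented.

    # Sketch: comparing a simple and a less simple forecasting rule with sMAPE.
    def smape(forecasts, actuals):
        return 100 * sum(2 * abs(f - a) / (abs(f) + abs(a))
                         for f, a in zip(forecasts, actuals)) / len(actuals)

    series = [112, 118, 125, 121, 130, 135, 141, 144, 139, 147]   # hypothetical history
    train, test = series[:7], series[7:]

    naive = [train[-1]] * len(test)          # very simple: repeat the last observation

    level = train[0]                         # single exponential smoothing, alpha = 0.3 (assumed)
    for x in train[1:]:
        level = 0.3 * x + 0.7 * level
    smoothed = [level] * len(test)

    print("naive sMAPE:   ", round(smape(naive, test), 1))
    print("smoothed sMAPE:", round(smape(smoothed, test), 1))
    # In the M3 results, more elaborate methods were on average no more accurate
    # than rules this simple.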
Common threads
In each case:
· The experts know a lot more facts than the layman.
· They also know more theories about their fields.
· There is little published research about the accuracy of these theories.
· Supposed experts are not familiar with what little there is.
· Predictions cluster – experts copy each other.
· Predictions do not generally cluster around the true value.
These fields may have value as sources of analogies and models. Sometimes even remote analogies and very simple models can be helpful.
However, their predictions are generally worthless.