Tuesday, 12 February 2008

Knowledge and pseudo-knowledge

It’s often easy to distinguish prejudice from informed opinion. Prejudice is an opinion ‘without visible means of support’ whilst the holder of an informed opinion can back it with arguments and examples. The holder may be part of a professional community with its own journals, models, prizes, etc.

But is every subject with these features a form of knowledge? It turns out that quite a lot of them are not.

One relatively easy way to determine whether a subject is really knowledge is to ask whether the people practicing it can make valid predictions. By valid I mean that their predictions are correct significantly more than half the time and much more often than the predictions made by non-experts.
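This test can be stated as a short sketch. The prediction records below are invented, and the 0.15 margin standing in for "much more often" is an arbitrary choice:

```python
# A minimal sketch of the validity test described above, with made-up
# prediction records (True = correct, False = incorrect).
def hit_rate(predictions):
    """Fraction of predictions that turned out correct."""
    return sum(predictions) / len(predictions)

def is_valid_expertise(expert_hits, layman_hits, margin=0.15):
    """A field passes the test if experts beat both chance and laymen.

    The 0.15 margin for 'much more often' is an arbitrary choice here.
    """
    expert = hit_rate(expert_hits)
    layman = hit_rate(layman_hits)
    return expert > 0.5 and expert >= layman + margin

# Hypothetical records: 8/10 correct for experts, 5/10 for laymen.
experts = [True] * 8 + [False] * 2
laymen = [True] * 5 + [False] * 5
print(is_valid_expertise(experts, laymen))  # True under these assumptions
```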

[Some subjects, eg theology and media studies, contain many theories that make no predictions. I have discussed three such subjects separately below.]

Pseudo-subjects are rare, possibly unknown, in science and engineering because these subjects have institutionalised methods for testing their theories. They are much commoner in medicine, largely because of the power of the placebo effect. For instance, homeopathy makes predictions about the effects of its treatments. When carefully tested to exclude placebo effects, eg in double-blind clinical trials, these predictions are generally wrong.

They are commoner still in the subjects that study human activity, eg politics, economics and the humanities.

In The Black Swan: The Impact of the Highly Improbable (Allen Lane, 2007), Nassim Taleb looks at the evidence on the accuracy of predictions made in security analysis, political science and economics. In each case he finds there are few studies but that those that exist show the predictive power of these fields to be poor.

Security analysis

Securities are tradeable financial instruments such as shares, options, collateralised loan obligations. The job of the securities analyst is to advise investors which to buy and which to sell.

T Tyszka and P Zielonka (Expert Judgements: Financial Analysts Versus Weather Forecasters, J. of Psychology and Financial Markets, 2002, vol 3(3), pp 152-160) found that, compared to weather forecasters, security analysts are worse at predicting but have more faith in their predictions. An analysis communicated by Jean-Philippe Bouchaud showed that predictions in this area are on average no better than assuming this period will be like the last period, ie worthless.

This is despite the analysts’ extensive knowledge of the firms and sectors they study!
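The 'this period will be like the last period' benchmark can be sketched as follows. All earnings figures and analyst forecasts here are invented for illustration:

```python
# Sketch of the naive persistence benchmark that, per the analysis above,
# analysts' forecasts fail to beat. All numbers are invented.
def persistence_forecast(series):
    """Naive forecast: each period predicted to equal the previous one."""
    return series[:-1]  # forecasts for periods 1..n are the values at 0..n-1

def mean_abs_error(forecast, actual):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

earnings = [1.00, 1.05, 0.98, 1.10, 1.07]   # actual values per period
analyst  = [1.20, 0.90, 1.30, 0.95]         # invented analyst forecasts for periods 1..4
naive    = persistence_forecast(earnings)   # [1.00, 1.05, 0.98, 1.10]
actual   = earnings[1:]

print(mean_abs_error(analyst, actual))      # analyst error
print(mean_abs_error(naive, actual))        # naive error (smaller here)
```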

Political science

Philip Tetlock (Expert Political Judgment: How Good Is It? How Can We Know?, Princeton University Press, 2005) asked about 300 supposed experts to judge the likelihood of certain events within the next five years. He collected about 27,000 predictions. He found:

· Experts greatly overestimated their accuracy.

· Professors were no better than graduates.

· People with strong reputations were worse predictors than others.


Economics

Taleb found no systematic study of the accuracy of economists’ predictions. Such evidence as exists suggests that they are just slightly better than random. For instance, Makridakis and Hibon (The M3-Competition: Results, Conclusions and Implications, Int. J. Forecasting, 2000, vol 16, pp 451-476) ran competitions comparing more and less sophisticated forecasting methods. They found that sophisticated methods, like econometrics, were no better than very simple ones.
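The M3 competition scored forecasting methods with the symmetric MAPE. A sketch of that metric applied to two hypothetical forecast sets (all figures invented) shows how a 'sophisticated' method can score worse than a naive one:

```python
# Symmetric mean absolute percentage error, one of the accuracy measures
# used in the M3 competition, applied to invented forecasts.
def smape(forecast, actual):
    """Symmetric MAPE: 2|f - a| / (|f| + |a|), averaged, as a percentage."""
    terms = [2 * abs(f - a) / (abs(f) + abs(a)) for f, a in zip(forecast, actual)]
    return 100 * sum(terms) / len(terms)

actual      = [102, 104, 101, 107]
naive       = [100, 102, 104, 101]   # last observed value carried forward
econometric = [ 95, 110,  96, 115]   # invented 'sophisticated' forecasts

print(round(smape(naive, actual), 2))        # the simple method...
print(round(smape(econometric, actual), 2))  # ...beats the complex one here
```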

Common threads

In each case:

· The experts know a lot more facts than the layman.

· They also know more theories about their fields.

· There is little published research about the accuracy of these theories.

· Supposed experts are not familiar with what little there is.

· Predictions cluster – experts copy each other.

· Predictions do not generally cluster around the true value.

These fields may have value as sources of analogies and models. Sometimes even remote analogies and very simple models can be helpful.

However, their predictions are generally worthless.

Theories that make no predictions

It’s interesting to look at three subjects (ie sets of facts and theories) that don’t seem to make predictions – and certainly give little thought to either predictions or testing. They are theology, critical theory and business studies.

Analysis shows surprising parallels between them.


Theology

Theology does not seem to make predictions. Most theology is aligned with a single major religion, or even a denomination or sect, though some ideas are shared. Theology has existed as a discipline for at least 1,500 years and has shown some evolution. However, it doesn’t seem to show much progress.

Theology has three fundamental problems:

· There’s doubt as to whether its subject matter – God – actually exists. Even saints and eminent churchmen have expressed doubts on this matter (though not usually in public).

· There’s no agreement as to how theological ideas are to be tested. Ideas may be tested against holy books (but which?), or tradition (which?) or personal revelation (whose?). Without this agreement progress is probably impossible.

· The views of scholars can be overruled by the official utterances of religious leaders.

Theology is thus the projection of religious feelings into the realm of reason and morals. There seems no reason to expect that such feelings will be good guides to the nature of reality, and thus theology should not be judged by its ability to produce predictions.

How it should be judged is another matter – and not one to which I have an answer. However, unless theologians can produce an answer it is difficult to see why anyone should pay the subject any attention.

Critical theory

Critical theory is a collection of theories used by scholars in the humanities. These theories include Marxism, psycho-analysis and post-structuralism. An introduction to critical theory will mention several dozen theories. It’s immediately clear to an interested outsider that this isn’t at root a quest for truth in the sense that the sciences are. For while science has many controversies – and occasional feuds – these eventually get resolved with the successful theories being consolidated into the larger body of science. This hardly happens in critical theory – though some theories have become unfashionable.

If critical theory is not a quest for truth what is it? Critical theory is, I believe, a form of politics. Its theories reflect, often explicitly, political movements in society. They are the projection of those movements and issues into the analysis of cultural products such as books, films and clothes. Now to the degree that this analysis is right it should not be judged by its ability to produce predictions but by its ability to produce social change. That’s not a judgement that I shall attempt here.

Business studies

I use the term business studies to cover research and analysis published by business schools and management consultancies. There is a great deal of such material and the quality varies greatly.

· Some is rigorously empirical. It treats businesses as phenomena whose behaviour can be studied. This can lead to predictions. The results of testing such predictions are reported VERY occasionally.

· Some of this material is usefully analytical. That is, it dissects a significant business problem in order to identify threats, opportunities and constraints. It can be useful to managers without making predictions.

· Much, however, is weak. It takes a small number of examples – often selected on no clear basis – and draws conclusions that seem largely subjective.

As with theology and critical theory there’s plenty of change but little clear progress. There’s little evidence of an accumulation of agreed facts and theories and little referencing of the work of others. Often the same ideas appear at different times under different names.

Furthermore, names are often used by consultancies and research houses as sub-brands and there are real material rewards for consultants and analysts who create names that are adopted by the market. Thus the published material often reflects the competition between business schools, for students, and between consultancies, for clients. (Such competition is not absent in theology and critical theory but it is more marked here.)

If business studies is judged by the volume of valid predictions it makes, it performs poorly, but many analysts and consultants would say that that is not its main purpose. Its purpose, they would say, is to recommend actions. How, then, should such recommendations be judged?

It is possible to judge them: medicine has the same purpose, and medical recommendations are judged by their outcomes. For business studies, however, this is typically very difficult. The number of companies adopting any given recommendation may be small – and they may be taking other initiatives in parallel. And they may well treat all their initiatives as confidential.

Friday, 18 January 2008

Why knowledge is NOT “justified true belief”

In Theaetetus, Plato’s Socrates argues that knowledge is “justified true belief”. That is, it is a belief for which the believer has a justification and that is, in fact, true (see en.wikipedia.org/wiki/Epistemology). This doesn’t really work.

Firstly, defining knowledge as belief excludes tacit knowledge, such as knowing how to ride a bicycle. My ability to ride a bicycle does not depend upon my having any particular beliefs about riding or, indeed, bicycles. So Socrates’ definition can only apply to explicit knowledge.

To see why it doesn’t even work for explicit knowledge consider the following truth table for beliefs.





Case   Is the belief justified?   Is the belief true?   Does it meet Socrates’ definition?
1      Yes                        Yes                   Yes
2      Yes                        No                    No
3      No                         Yes                   No
4      No                         No                    No

Justification: I agree with Socrates in wanting to distinguish knowledge from mere guesswork. To count as knowledge a belief needs to be justified. Suppose you believe that Hungary is an attractive market for your latest product. This is a justified belief if you can produce evidence for it. I would accept market research and the success of similar products as sufficient evidence so if you have that data I’ll accept your belief as justified. I’m prepared to agree that a belief is justified even if the evidence is not conclusive, as it won’t be in this case until you’ve tried to sell it in Hungary.

Truth: The problem comes in distinguishing cases 1 and 2 in the table. Is your belief about the Hungarian market true? The only way that I can know this is to wait for the results of your Hungarian launch. And if you never launch your product in Hungary then neither of us will ever know if your belief was true – and therefore knowledge. This is inconvenient but not absurd.

How about the market research? Is that knowledge, ie is the data you have accurate and appropriate? You believe so and you have reasons for doing so, eg that you used a reputable firm with experience of the Hungarian market. You certainly have a justified belief. But can you know that it’s true? You cannot be certain, so this, also, does not meet Socrates’ definition of knowledge.
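The four cases in the truth table can be enumerated in a short sketch; Socrates’ definition admits only the case where a belief is both justified and true:

```python
# Enumerate the four combinations of justified/true and apply Socrates'
# definition: knowledge iff the belief is both justified and true.
from itertools import product

def is_knowledge_jtb(justified, true):
    """'Justified true belief': knowledge only when both conditions hold."""
    return justified and true

for case, (justified, true) in enumerate(product([True, False], repeat=2), start=1):
    print(case, justified, true, is_knowledge_jtb(justified, true))
```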

Not truth but confidence

The reality here is that beliefs can be held with varying degrees of confidence. We can rarely be certain but are often, justifiably, confident. And we determine the proper degree of confidence using just those arguments that constitute the justification for the belief.

So knowledge, I suggest, is those beliefs where the arguments and evidence in favour make them seem more likely than not. Beliefs where the arguments and evidence in favour fall short of that are rumours, opinions or prejudices.

This view has two immediate consequences:

  • In applying knowledge we should consider how sure we are of its truth. We should not claim certainty where the evidence is doubtful.
  • Systems that store knowledge should include assessments of reliability and/or pointers to the sources of the supposed knowledge.

Thursday, 17 January 2008

Now ‘models’ are replacing ‘views’

Think about accounts. All businesses have to produce financial accounts for their auditors, shareholders and regulators. Back in the bad old days of BC – Before Computers – those were often the only accounts produced. Now, of course, almost every business also produces a variety of routine management accounts plus ad hoc analyses as needed.

These are possible both because we have computers to do the work and because we keep lots of financial data in databases. This data constitutes a model of the business (see my post on Types of Model). To the degree that it’s a good model all the required accounts can be derived from it. The accounts are views of the model and there are an unlimited number of valid views.

This change from creating a few predefined views to creating a model is not restricted to accounting. In fact it’s pretty general.
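As a toy illustration of accounts as views (with invented figures), a single list of transactions can serve as the model from which any number of different accounts are derived:

```python
# One model (a list of transactions), many views (different accounts).
transactions = [
    {"date": "2008-01-10", "account": "sales", "amount": 500},
    {"date": "2008-01-15", "account": "rent",  "amount": -200},
    {"date": "2008-02-03", "account": "sales", "amount": 300},
    {"date": "2008-02-20", "account": "wages", "amount": -150},
]

def view_by_account(txns):
    """One view: totals per account."""
    totals = {}
    for t in txns:
        totals[t["account"]] = totals.get(t["account"], 0) + t["amount"]
    return totals

def view_by_month(txns):
    """Another view of the same model: net cash flow per month."""
    totals = {}
    for t in txns:
        month = t["date"][:7]
        totals[month] = totals.get(month, 0) + t["amount"]
    return totals

print(view_by_account(transactions))  # {'sales': 800, 'rent': -200, 'wages': -150}
print(view_by_month(transactions))    # {'2008-01': 300, '2008-02': 150}
```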

The trend to model building
In the past when people wanted to communicate a design or understand a thing or process they created a view of the thing or process. These views included maps, accounts and blueprints and required special materials, tools and skills. Usually sets of these were needed to define a territory, business or design and it was difficult to keep them in synch. Each kind of view was defined by a list of allowed elements or features (a meta-model); other elements and features being either ignored or indicated by annotations.

Now an organization is increasingly likely to build a digital model of the thing of interest from which it can derive any number of views. The model is also defined by a list of allowed elements or features but a longer list than for any view. From the model we can produce both familiar and novel views and there is no synchronization problem.

Examples include:

  • Maps: Many maps are possible for any territory. For instance, they may show or omit roads, railways and contours. Nautical charts show almost nothing on land but a great deal about the sea. Ordnance Survey has digitised its map data and derives actual maps from this resource. Many companies now have Geographical Information Systems that allow them to combine their own data with that available publicly.
  • Accounts: Databases of assets and transactions support many kinds of financial and management accounts.
  • Engineering design: Traditionally engineers produced plans, front and side elevations and cross-sections. CAD models can yield both blueprints and lists of parts and jobs.
  • Building design: Construct IT at Salford Univ. has proposed that building projects should be based on a shared database that fully defines the building.

Being digital these models support many kinds of analysis and processing that were either impossible or very expensive when only views were available, eg calculations of load, simulation of performance or experience.

  • Civil engineers can show what their constructions will look like when complete.
  • Aeronautical engineers can simulate airflow and thus calculate performance and fuel efficiency.

Back to accounting

However, most accounting ‘models’ are not good enough to simulate the consequences of changes in processes or trading conditions. Some organizations have built good enough models but not, generally, as part of their accounts.

Types of model

In The Starting Point, my first post on this blog, I argued that “(B) Much knowledge can be seen as … models”. That’s obvious to some degree but it raises the question of what constitutes a model.

There are at least four kinds of model: Structural, taxonomic, developmental and causal. There are also metamodels.

In The Starting Point I argued that “(A) Knowledge is most interesting and important where it is general.” Some models are very general. Thus most fundamental models in physics can apply anywhere in the universe. Some are entirely specific, eg the UK Treasury’s model of the UK economy applies only to the UK and probably for only a few years. For now I only note this distinction. It may be desirable to formalise it at some future point.

Structural models

Structural models show the structure of an actual or proposed object. They may be physical or virtual. Structural models are used in many areas including medicine, chemistry and the various kinds of engineering.

Doctors have created generic models of the body’s skeleton, nerves, blood vessels, etc. which are used in medical education and to guide surgery. In sensitive cases exploratory operations and non-invasive scans (using X-rays, ultrasound or MRI) are used to create models of an individual patient’s body. Another recent advance has been the construction of full-size

Chemists have long used structural models of molecules to help them reason about their properties and reactions. For many years they were drawn on paper or built using rods and balls but now they are increasingly likely to be electronic. Electronic models allow calculation of, eg, molecular shapes.

Engineers used to rely on drawings to communicate their designs to clients and those who have to build them. They increasingly use 3D models which also support design work and construction. During design they enable stress calculations, simulation of performance, compatibility, etc. They may generate lists of required materials, work allocations and control data for numerically-controlled tools.

Taxonomic models

A taxonomic model is a set of categories with allocation rules. These rules are usually text for use by a human classifier but may be executable. Thus, in 1999 the BBC automated the allocation of incoming news reports to its own 5,000 news categories. Journalists use these categories to select the reports they need. The system, News On-line (NEON), replaced the people who had previously done this job.
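Executable allocation rules can be sketched in the spirit of that system; the real NEON was far more sophisticated, and the categories and keywords below are invented:

```python
# A toy executable taxonomy: categories with keyword-based allocation rules.
# The categories and keywords are invented for illustration.
CATEGORY_KEYWORDS = {
    "politics": {"election", "parliament", "minister"},
    "sport":    {"match", "league", "goal"},
    "business": {"shares", "profits", "merger"},
}

def allocate(report_text):
    """Assign a report to every category whose keywords it mentions."""
    words = set(report_text.lower().split())
    return sorted(cat for cat, keys in CATEGORY_KEYWORDS.items()
                  if words & keys)

print(allocate("Minister questioned in parliament over merger"))
# → ['business', 'politics']
```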

Some taxonomies are very simple. For instance, the states of matter are solid, liquid, gas and plasma. These are often stable.

Others are very large and may evolve continuously. Well-known large-scale taxonomies include:

  • In science, the Periodic Table of the Chemical elements and the Linnaean taxonomy of living things.
  • In business, Standard Industrial Classification (SIC) codes, the UN product classification and the Yellow Pages categories.
  • In marketing, the Mosaic set of consumer profiles.
  • In information retrieval, the Dewey decimal system and the Yahoo ontology.

We sometimes find that a causal model underlies a taxonomy, eg, blood groups. Sometimes this is known first; sometimes only later. Thus:

  • The distinctness of the chemical elements reflects the discrete proton numbers of atomic nuclei.
  • The Linnaean taxonomy reflects the evolution of living things – a phenomenon that was not understood in Linnaeus’ time.
  • The Mosaic profiles reflect people’s lifestyle choices and resources

This also applies to sub-atomic particles and blood groups but not (so far) to genres, SIC codes, the Dewey decimal system or the Yahoo ontology.

Developmental models

A developmental model asserts that its subject, eg an organism or a market, must pass through a series of stages. There are many stage models. Amongst the best-known are those developed by Piaget in the area of child development.

A well-known business example is Geoffrey Moore’s market development model:

  • Innovators
  • Early adopters
  • Chasm
  • Early Majority
  • Late Majority
  • Laggards

(See Crossing the Chasm by Geoffrey Moore.)
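The defining claim of such a model – that its subject moves forward through a fixed sequence of stages – can be expressed as a trivial sketch:

```python
# A developmental model as code: an ordered sequence of stages through
# which a market must pass, one stage at a time.
STAGES = ["Innovators", "Early adopters", "Chasm",
          "Early Majority", "Late Majority", "Laggards"]

def next_stage(current):
    """Return the stage that must come after the current one, or None."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None

print(next_stage("Early adopters"))  # → 'Chasm'
print(next_stage("Laggards"))        # → None (no further stage)
```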

Like a taxonomy a developmental model may be based on a causal model or may be purely empirical.

Causal models

Causal models show how events lead to consequences.

The most basic causal models are purely indicative – little more than a list of factors that predispose to a result.

At the next step up are empirical models. These models produce forecasts by extrapolating from previous experience. Many economic and financial planning models are of this kind.

The best causal models are mathematical and allow quantitative prediction of consequences. They may be tacit or explicit. Tacit causal models may be no more than correlations. Explicit causal models, eg Newtonian mechanics, include explanations.
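An empirical model of the extrapolating kind can be sketched in a few lines: fit a straight line to past observations and project it forward. The demand figures are invented and deliberately simple:

```python
# Sketch of an empirical causal model: forecast by extrapolating a
# least-squares line fitted to previous experience (invented data).
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

years  = [1, 2, 3, 4]
demand = [10, 12, 14, 16]            # perfectly linear for simplicity

slope, intercept = fit_line(years, demand)
print(slope * 5 + intercept)         # forecast for year 5 → 18.0
```

Such a model forecasts without explaining; an explicit causal model would add the mechanism behind the trend.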

Some models are hybrid. Typically they use theory-based formulae where they are available and empirical formulae elsewhere. The Dupuy Institute's Tactical Numerical Deterministic Model (see separate posting) appears to be a hybrid.

A causal model requires a taxonomy as foundation. That is, the entities in the model must be clearly defined. Sometimes the taxonomy predates the causal model but some causal models, probably including the most significant ones, require some revision of the taxonomy.


Metamodels

A metamodel is a general model that says, for one or more specific models, which features are significant and, sometimes, how they are represented.

Suppose a database contains a digital model of a gearbox. Underlying the database is a schema, an executable digital listing of the kinds of data items and relationships used to store that model. This schema is the metamodel for the gearbox model and would be equally applicable to other gearboxes and, probably, to a wide range of engineered structures.
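A toy version of such a schema, with an invented two-part metamodel, shows the relationship: the schema lists the kinds of items a gearbox model may contain, and any particular model is validated against it:

```python
# A toy schema acting as a metamodel: it defines the allowed kinds of
# parts and their fields, and validates any particular gearbox model.
SCHEMA = {                           # the metamodel (invented)
    "gear":  {"teeth": int, "diameter_mm": float},
    "shaft": {"length_mm": float},
}

def validate(model):
    """Check every part in the model against the schema."""
    for part in model:
        fields = SCHEMA.get(part["kind"])
        if fields is None:
            return False             # a kind the metamodel doesn't allow
        if not all(isinstance(part.get(f), t) for f, t in fields.items()):
            return False             # a missing or mistyped field
    return True

gearbox = [                          # one particular model
    {"kind": "gear",  "teeth": 24, "diameter_mm": 40.0},
    {"kind": "shaft", "length_mm": 120.0},
]
print(validate(gearbox))  # → True
```

The same SCHEMA would validate other gearboxes equally well, which is what makes it a metamodel rather than a model.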

In principle there are metametamodels – but these are only needed by, for instance, people designing new database and knowledge management systems.

Tuesday, 15 January 2008

TNDM: A predictive model for warfare

The Dupuy Institute's Tactical Numerical Deterministic Model (TNDM) (The Economist, Technology Quarterly, 17 September 2005) is a forecasting system with an excellent track record in forecasting the durations and casualties in armed conflicts. The Institute has a database of raw information on which its analysts perform extensive statistical analysis to identify patterns and trends. TNDM is available commercially (for $93,000 in 2005) and has been bought by both governments and arms suppliers. The Swedish government uses TNDM to propose new kinds of weapons.

The model includes many factors: some technical, eg types and characteristics of weapons and armour; some geographical, eg presence of rivers; some tactical, eg disposition of troops; some logistical; and even the matter of morale.

TNDM is generally more accurate than other 'war forecasting' systems because:

  • It is based on real data.
  • Its model is based on rigorous analysis rather than the wishful thinking of arms manufacturers and military organisations.
  • Its model has been repeatedly tested against actual experience.

The success of TNDM is significant because it deals with human behaviour (which is sometimes claimed to be wholly unpredictable) and because it shows the success of empirical, 'scientific' method in an area remote from the physical sciences.

Tuesday, 1 January 2008

Machines and tacit knowledge

The knowledge management (KM) literature distinguishes between tacit and explicit knowledge. Explicit knowledge is the knowledge you can state. You can answer questions on it (so public examinations and pub quizzes are both tests of explicit knowledge).

Tacit knowledge is the knowledge you have that you can't state. The usual example is riding a bicycle. Although I can do it I can't explain how I do it - or not in a way that will help you to learn to do it.

Similarly, many professionals have skills that can be seen as examples of tacit knowledge. For instance, the best sales staff can choose the right approach to each customer. The best writers can choose the best way to make each point. The best negotiators adopt the right tactics in each negotiation. The figure shows a few examples. (Method for human-powered flight is shown as passive because I don't have the required muscles for it.)

In each case the best practitioners outperform the others on objective measures. Their tacit knowledge has real business value for them and their employers.

This distinction matters because tacit and explicit knowledge must generally be taught in different ways. Insofar as tacit knowledge can be taught at all, it's taught through experience rather than lectures.

Most KM experts assume that this distinction is only relevant to human knowledge. All other knowledge, eg that in books, is explicit. Underlying this assumption is the further assumption that all non-human forms of knowledge are passive. That is, they can only be put to work by a person who must first make the knowledge their own.

Both these assumptions are wrong.

Knowledge can be embodied in things and machines in several ways and in some of these it can be applied by the machine. The next figure shows the possibilities.

Jigs and machine tools
A jig (when not an Irish folk dance) is a template or guide used to ease the making of multiple items to the same design. The jig thus embodies part of the design. In the pre-industrial age jigs were made by craftsmen for their own use but industrialisation led to specialisation and the making of jigs by craftsmen for use by less-skilled workers. The jig therefore replaced part of the workers' knowledge.

During the 20th century more and more of the workers' skill was replaced by machines. Jigs and semi-automatic machines were succeeded by numerically-controlled machine tools and robots. In most cases the tools and robots do not contain the design of the thing being made in an explicit form; you can't answer questions about the design by inspecting the controlling programs - or not easily. But they do contain it tacitly.

Many programs can perform tasks which, if done by a person, would require knowledge. Consider the setting of the premium for motor insurance. After you have input your details, mileage, motoring convictions and so forth to a website the insurance company runs a program which calculates a premium.

The formula used to calculate the premium is clearly present in the program and can be extracted by study. (In practice the amount of study needed may be very great. Cases in which it has proved too great for the programmers attempting the task are far from unknown.)
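A sketch of the kind of explicit, executable formula such a program embodies (all factors and loadings below are invented; real underwriting formulae are far richer):

```python
# An invented motor-insurance premium formula: the knowledge is explicit
# (the loadings can be read straight off the code) and active (executable).
def motor_premium(base=300.0, annual_miles=8000, convictions=0, age=40):
    premium = base
    premium *= 1.0 + annual_miles / 40000        # mileage loading
    premium *= 1.0 + 0.25 * convictions          # conviction loading
    if age < 25:
        premium *= 1.5                           # young-driver loading
    return round(premium, 2)

print(motor_premium(annual_miles=12000, convictions=1, age=22))  # → 731.25
```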

Before such programs existed premiums were decided by underwriters, who applied their skill and knowledge, or by clerks who applied formulae supplied by underwriters. Today's programs stand in the same relation to the underwriters as the clerks of yesteryear. They contain explicit knowledge and are active (ie executable).

Artificial neural networks
Artificial neural networks (ANN) are combinations of programs and data that mimic very simple brains. Unlike ordinary programs ANNs must be taught by being trained repeatedly on large sets of data. They can be taught to perform a variety of tasks including some, eg risk assessment, that are valuable in business. Once they have been taught they can perform these tasks reliably.

However, the knowledge that appears to be being used by an ANN cannot be found within it. This knowledge is therefore executable and tacit.

Calling this knowledge tacit is more than an analogy. ANNs work best on tasks for which humans also need tacit knowledge, eg bicycle riding. These are tasks on which people cannot be instructed and computers cannot readily be programmed. People and ANNs learn these tasks by repeated trials. There are good reasons to think that brains do work somewhat like ANNs when learning these tasks.
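This can be illustrated with a minimal artificial neuron taught by repeated trials. After training, its 'knowledge' of the AND rule is just three numbers – two weights and a bias – which answer no questions about the rule they embody:

```python
# A single perceptron learning AND by repeated trials. The learned
# 'knowledge' lives only in the weights, which are opaque to inspection.
def train_perceptron(examples, epochs=20, rate=0.1):
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for x1, x2, target in examples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out
            w1 += rate * err * x1            # nudge weights toward the target
            w2 += rate * err * x2
            b  += rate * err
    return w1, w2, b

AND = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 1)]
w1, w2, b = train_perceptron(AND)
for x1, x2, target in AND:
    print(x1, x2, 1 if w1 * x1 + w2 * x2 + b > 0 else 0)
```

Real ANNs are vastly larger, but the point stands: the trained behaviour is reliable while the knowledge behind it remains tacit.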