When Do Statistics And Algorithms Trump Investing Judgement?

In an earlier article, we reviewed Daniel Kahneman’s Prospect Theory, his latest book “Thinking, Fast and Slow”, and some of the key findings of Behavioural Finance, and we’ll be discussing a number of the implications for investors in a subsequent piece.

However, in this article, we’d like to dwell on one interesting discussion in Chapter 21 of the book – “Intuition vs. Formulas”. This chapter discusses in some detail the efficacy and value of checklists and algorithms in addressing some of the predictable flaws in human decision-making – users of Stockopedia PRO will already know that these kinds of tools are a key part of our feature set.

The Efficacy of Checklists 

Kahneman writes that a key source of inspiration for his work was the book Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence by Paul Meehl. Meehl was an American psychologist who studied the successes and failures of predictions in many different settings in the 1940s. He found overwhelming evidence that predictions based on mechanical (formal, algorithmic) methods of combining data outperformed clinical (subjective, informal, “in the head”) methods based on expert judgement.

A famous example confirming Meehl’s conclusion is the “Apgar score”, invented by the anesthesiologist Virginia Apgar in 1953 to guide the treatment of newborn babies. The Apgar score is a simple formula based on five vital signs that can be measured quickly: Appearance, Pulse, Grimace, Activity, Respiration. It does better than the average doctor in deciding whether the baby needs immediate help. It is now used everywhere and saves the lives of thousands of babies.
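The mechanics are worth seeing: each of the five signs is rated 0, 1 or 2 by the attending clinician, and only the 0–10 total drives the decision. A minimal sketch (the function and field names are ours, not from any medical source, but the 0–2 per-sign scoring and 0–10 total are how the real Apgar score works):

```python
# Sketch of an Apgar-style checklist score: five signs, each rated 0-2,
# summed into a 0-10 total. The decision rests on the total, not on a
# holistic "impression" of the newborn.

APGAR_SIGNS = ("appearance", "pulse", "grimace", "activity", "respiration")

def apgar_score(ratings):
    """Sum the five 0-2 ratings into a 0-10 score."""
    for sign in APGAR_SIGNS:
        if not 0 <= ratings[sign] <= 2:
            raise ValueError(f"{sign} must be rated 0, 1 or 2")
    return sum(ratings[sign] for sign in APGAR_SIGNS)

ratings = {"appearance": 2, "pulse": 2, "grimace": 1,
           "activity": 2, "respiration": 1}
print(apgar_score(ratings))  # 8
```

The point is the *format* of the decision aid: a fixed, simple combination of a few observable features, which any clinician can apply identically.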

Another amusing example of the power of statistical prediction is the Dawes formula for the durability of marriage. This formula apparently does better than the average marriage counselor in predicting whether a marriage will last. The formula is:

“frequency of love-making minus frequency of quarrels.”
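What makes this result striking is that the formula is an “improper” linear model: it weights its two inputs equally, with no statistical fitting at all, and still beats expert judgement. A toy sketch (the function name and the interpretation of the sign are our illustrative assumptions):

```python
def dawes_marriage_index(lovemaking_per_week, quarrels_per_week):
    """Dawes's improper linear model: unit weights, no tuning.
    A positive index suggests stability; a negative one, trouble."""
    return lovemaking_per_week - quarrels_per_week

print(dawes_marriage_index(3, 1))  # 2  -> predicted stable
print(dawes_marriage_index(1, 4))  # -3 -> predicted at risk
```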

Similarly, as Andrew McAfee of the Harvard Business Review points out, Princeton economist Orley Ashenfelter predicts Bordeaux wine quality using a simple model he developed that takes into account winter and harvest rainfall and growing-season temperature. Although wine critic Robert Parker has called Ashenfelter’s approach “so absurd as to be laughable”, Ian Ayres notes in his great book Supercrunchers that Ashenfelter was right and Parker wrong about the ’86 vintage. Interestingly, McAfee also references a 2000 paper which surveyed 136 studies in which human judgement was compared to algorithmic prediction. Only eight of the studies found that people were significantly better predictors of the task at hand.
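Ashenfelter’s model is itself just a small linear regression. The sketch below uses coefficients as they are commonly reported in popular accounts of his work (including Supercrunchers); treat them as illustrative of the model’s shape rather than as the definitive published fit:

```python
def bordeaux_quality(winter_rain_mm, growing_temp_c, harvest_rain_mm):
    """Linear sketch of Ashenfelter's Bordeaux regression.
    Coefficients are as reported in popular accounts -- illustrative only.
    Wetter winters and warmer growing seasons help; wet harvests hurt."""
    return (12.145
            + 0.00117 * winter_rain_mm
            + 0.0614 * growing_temp_c
            - 0.00386 * harvest_rain_mm)

# A warm, dry-harvest vintage scores higher than a cool, wet one:
print(bordeaux_quality(600, 18.0, 120))
print(bordeaux_quality(600, 15.5, 300))
```

Again the pattern is the same as the Apgar score and the Dawes formula: three measurable inputs, fixed weights, no expert override.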

Why algorithms beat judgement

In “Thinking, Fast and Slow”, Kahneman goes on to discuss why experts appear to be inferior to algorithms. The main suggestion is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it just reduces validity. Kahneman observes:

“Simple combinations of features are better… Several studies have shown that human decision makers are inferior to a prediction formula even when they are given the score suggested by the formula! They feel that they can overrule the formula because they have additional information about the case, but they are wrong more often than not. According to Meehl, there are few circumstances under which it is a good idea to substitute judgment for a formula.”

How applicable is this to investing? 

All of this leads naturally to the finance/investing domain. Just how applicable is Meehl’s conclusion here? Are there reasons to think that investing is different from, say, medicine or psychology in terms of the effectiveness of expert judgement?


The forecasting record of analysts would suggest otherwise – but that may be a cheap shot, given that analysts suffer from a well-documented conflict of interest: their employers often act as investment bankers to the very companies the analysts cover.

James Montier’s excellent piece – An Ode to Quant – discusses this question in some detail and clearly takes the view that investing is unlikely to be different, although he notes some significant obstacles to widespread acceptance of this view:

Firstly, the fear of technological unemployment. This is obviously an example of a self-serving bias. If, say, 18 out of every 20 analysts and fund managers could be replaced by a computer, the results are unlikely to be welcomed by the industry at large. Secondly, the industry has a large dose of inertia contained within it. It is pretty inconceivable for a large fund management house to turn around and say they are scrapping most of the processes they had used for the last 20 years, in order to implement a quant model instead.

Of course, this is not to say that judgement has no place in investing at all. Changing market circumstances may invalidate a particular algorithmic strategy. Furthermore, without some degree of intuition, it would presumably not be possible to decide which set of parameters – given a potentially infinite choice – to factor into a checklist in the first place.

Nevertheless, in the investing domain, as in many others, it seems that the implications of Meehl’s research have still not been fully accepted. Algorithmic/quant investing is seen as cold, clinical and brittle – and regarded with some degree of scepticism – while the “informed view” of an expert analyst is much more comforting. But it remains to be seen how long this view can be sustained, should evidence continue to mount of better returns from the former approach.

What are your thoughts? 

