Ignorance, nothing more or less, has led the cricket “fan” to trash the wonderful method developed by Frank Duckworth (left) and Tony Lewis © Getty Images

When one falls seriously sick, one goes to a qualified doctor, not someone who dabbles in medicine in his leisure hours. Similarly, to build a bridge or an important building, one hires an architect. Yet the rather complicated field of cricket data analysis is rife with questionable methods of measurement, in many cases developed by individuals with little training in mathematics or statistics. Arunabha Sengupta cites reasons why many such analyses amount to nothing but nonsense.

Specialists and others

On the last day of a Physical Training course in a York gymnasium, preparatory to the forthcoming raid on Dieppe in 1942, a 24-year-old sergeant-instructor named Len Hutton attempted a ‘fly spring’. Unfortunately, the mat slipped under him and he crashed to the floor. The fall resulted in a complicated fracture of the left forearm and a dislocation of the ulna at the base of the wrist.

Within a few weeks, Hutton had been transferred to Wakefield where he found himself under the scalpel of the famed Leeds surgeon Reginald Broomhead. After multiple operations and bone grafts, the career of perhaps the greatest post-War opening batsman was saved.

What is important to note here, albeit obvious, is that Hutton was not treated by some bank accountant who dabbled in experimental medicine during his non-working days. He relied on a specialist.

Sounds too obvious? Bear with me.

Yes, that is obvious. When Hutton’s contemporary batting great Denis Compton faced all the complications surrounding his now-famous knee-cap, he was operated on by the famous orthopaedic surgeon Bill Tucker. A few years before that, Bob Appleyard’s tuberculosis-affected lungs had been surgically treated by the thoracic surgeon Geoffrey Wooller.

From Don Bradman’s near-fatal appendicitis in 1934, treated by Sir Douglas Shields, to Sachin Tendulkar’s troublesome tennis elbow, restored to health by Dr Andrew Wallace, every known case of career threatening ailments has been treated by specialists.

The saga continues with Yuvraj Singh and Michael Clarke being revived by reputed oncologists.

If, God forbid, something as serious happens to you or me or any of our near and dear ones, it is a specialist we will consult, not some FedEx delivery man who has a passing interest in medical science.

None of these players were seen walking back into action after following the prescriptions of playwrights, accountants, taxidermists, or men from any other profession who amused themselves with medical experiments during the weekends. It is not that such men did not offer their services. Compton had plenty of advisers who asked him to go for various incredible alternative therapies. Tendulkar had even more. But they trusted their doctors.


Why? Because doctors go through serious, structured training across a standard curriculum, followed by specialisation, in order to perform the complicated task of diagnosing conditions and healing the human body. That education cannot be compensated for by a passing interest and some ad hoc experimentation.

Similarly, to build a new pavilion at the Melbourne Cricket Ground or a bridge across the Thames, we would ask qualified and proven architects, not a librarian who has pleasantly fantasised about architecture during his idle hours after reading Ayn Rand’s The Fountainhead.

Specialist knowledge is considered important in the case of health or similarly important issues.

Yet, when we take the case of statistical analysis of cricket, a complicated endeavour with complex data that requires at least as much expertise as that of a qualified doctor treating diseases, we come across very little research work done by qualified statisticians.

Sporting discussion spaces have always had an abundant sprinkling of interesting, occasionally useful and quite often meaningless amateur forays into statistics. Baseball has seen plenty of that. It is one of the charms of sports where figures are generated. But that ‘statistics’ is almost always limited to the representation of data in tabular and sometimes graphical forms, without much dwelling on, or awareness of, statistically robust inference.

However, it becomes a rather serious threat to the understanding of the game when examples of absurd abuse of statistics, a predominant feature of amateur statisticians, acquire a veneer of seriousness through well-orchestrated marketing and promotion on supposedly prominent cricket websites.

Also incredible is the assertion that a statistical or mathematical background is rather unnecessary for conducting such analysis. Questioning the statistical expertise of people who are consistently pulling the wool over the eyes of the populace, parading nonsense as analysis, is often labelled fallacious.

My assertion, rather, is that most claims of path-breaking analysis techniques, some supposedly stumbled upon accidentally by armchair statisticians, are strikingly similar to claims that reiki, or pineapple-based nutrition, or the habit of an early-morning cold bath has suddenly cured cancer.

Amateur breakthroughs are very, very rare these days. Leonardo da Vinci has been dead for nearly 500 years.

Playing at the low stakes

This is especially true when it comes to comparing the performances of cricketers over careers, and evaluating how effective they have been in the team’s cause — an endeavour which involves serious use of statistics, specifically in the area of conditional probability. This sort of analysis needs specialist knowledge, make no mistake about that. As we shall see in this article, conditional probabilities are extremely counter-intuitive and difficult to work with.

Sometimes the use of a specialist with advanced statistical training comes down to just one vital fact: he can say with certainty that a single magic-number index is impossible to derive, especially when we are analysing every match played in history and putting all the players on the same measuring scale. A trained statistician comes with this added advantage. He or she knows the limits of using numbers. A publicity-hogging pseudo-analyst, on the other hand, is blissfully unaware of such limitations.

One word of clarification before I start explaining in detail why knowledge of advanced statistics is important. By ‘statistician’, I am not talking about cricket statisticians. They are excellent men who do wonders with collecting and categorising data. When I say ‘statistician’ in this article, I am referring to men with advanced training in the science of statistics.

Why is it that we are not ready to trust non-doctors when it comes to treating serious diseases, non-architects when building important structures, but are relatively tolerant of non-statisticians who claim they have striking new cricketing analysis to present before the world?

People without a degree in medical science become clueless once they get into the depths of the technical terms of a condition or treatment. Hence, it is well-nigh impossible for them to fathom the quality of a doctor’s methods. It is similarly difficult for non-statisticians to evaluate the quality of a statistical analysis. The scientific community relies on peer review, and this sort of due diligence is conspicuous by its absence in the world of cricket statistics.

Why is it that we still see a large number of people who tend to take these pseudo-statisticians at face value?

The obvious reason is that the stakes are way less.

When afflicted by a critical disease, we know we can lose a lot — even our lives — if we opt for nonsense treatment. In the case of a cricketing analysis, we can always revel in the results if they confirm our supposed ‘knowledge of the game’. If they do not perform the confirmation act, we can always say ‘numbers don’t really mean anything’ or ‘the analysis is flawed’ and shoot down the work without evaluation.

But when statisticians are engaged in clinical trials, the inference work that ascertains whether or not some drug for treating AIDS is effective, we can rest assured that we will see serious statisticians at work. An actor with a passionate or personal interest in AIDS will not be the one doing the work. The stakes there are much higher.

Why is statistics difficult?

The obvious question to be asked now is why I am saying statistics is that difficult to understand. After all, the numbers are there for all to see. How difficult can it be?

Yes, every single person can see the human body as well. Yet the diagnosis of diseases remains the domain of qualified doctors. Additionally, statistical and probabilistic thinking is, by its very nature, often counter-intuitive, and the human brain is not naturally adapted to think that way.

Let me cite some examples.

Question 1: Let us assume that there are three rank tail-enders in a team, to bat at Nos. 9, 10 and 11. We know all of them are of the same batting calibre, and that one of them is Monty Panesar. Let us call the other two X and Y.

The captain says he has randomly allocated batting slots 9, 10 and 11 to the three tail-enders. If you can correctly guess which position Panesar will bat at, you will win $1,000.

Suppose you pick No. 10 and say so to the skipper. The captain now nods and tells you that he will give you some additional information. Of No. 9 and No. 11, he reveals one, say No. 9, and says that is not where Monty will bat. Now, he gives you the option of either sticking to No. 10 or shifting to No. 11 for the wager. The question is: will it increase your chances of winning the $1,000 if you shift your bet to No. 11 now?

Read it again if required. Think about it. Will you switch your choice? Does it make any difference?

Question 2: The past wagon wheels show that batsman A scores 65% of his runs on the offside while batsman B scores 45% of his runs on the offside. You are given a wagon wheel of an innings of one of the two, without knowing which batsman, and it shows 55% of the runs on the off-side. Do you think it is the scoring chart of batsman A or batsman B? Which do you think is more probable?

Be honest while trying to answer the questions.

Well, if you answered Batsman A for the second question you can rejoice, because you are in the majority. A whopping 67 of the 89 respondents we asked said A. The correct answer, however, is B. It can be proven with simple techniques of probability theory.
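For the curious, the claim can be checked with a short computation. Under a simple model (my assumption, not stated in the question) in which each run independently lands on the off side with the batsman’s career probability, the observed 55% chart is marginally more likely for B than for A:

```python
import math

def per_run_log_likelihood(p_offside, observed_share):
    """Average log-likelihood per run of an innings in which
    `observed_share` of the runs came on the off side, assuming each
    run independently falls on the off side with probability p_offside."""
    return (observed_share * math.log(p_offside)
            + (1 - observed_share) * math.log(1 - p_offside))

ll_a = per_run_log_likelihood(0.65, 0.55)  # batsman A: 65% off-side career share
ll_b = per_run_log_likelihood(0.45, 0.55)  # batsman B: 45% off-side career share

print(f"A: {ll_a:.5f}, B: {ll_b:.5f}")
# B's log-likelihood comes out (slightly) higher, so the 55% chart is
# more probable for B, and the gap widens as the innings gets longer.
```

The gap is tiny per run but compounds over an innings, and it exists because likelihood is not symmetric about the observed share, even though 55% sits exactly midway between 65% and 45%.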

And about question 1: If intuition tells you that changing the bet will have no effect at all, it might be surprising to learn that a switch actually doubles your chances of winning.

There is a reason Monty Panesar was chosen as the tail-end batsman, and that has nothing to do with his diverse deeds on the pitch. The question is an adaptation of a famed poser known as the Monty Hall Problem. It just shows how difficult probabilistic reasoning is.

If you are still with me, do look up the Monty Hall Problem on Wikipedia or elsewhere to get the details of the answer. Trust me, you will find it completely worth your time and effort.
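If the answer still feels wrong, the tail-ender version of the puzzle can be simulated directly. The sketch below (a Monte Carlo check, with names of my own choosing) plays the game a large number of times under each strategy:

```python
import random

def play(switch, rng):
    """One round: Monty is hidden in a random slot, we guess slot 10,
    the captain reveals a non-Monty slot among the other two, and we
    either stick or switch to the remaining slot. Returns True on a win."""
    slots = [9, 10, 11]
    monty = rng.choice(slots)
    guess = 10
    revealed = rng.choice([s for s in slots if s != guess and s != monty])
    if switch:
        guess = next(s for s in slots if s != guess and s != revealed)
    return guess == monty

rng = random.Random(42)
n = 100_000
stick_wins = sum(play(False, rng) for _ in range(n)) / n
switch_wins = sum(play(True, rng) for _ in range(n)) / n
print(f"stick: {stick_wins:.3f}, switch: {switch_wins:.3f}")
# stick hovers around 1/3, switch around 2/3
```

Sticking wins about a third of the time; switching wins about two-thirds, exactly as the theory promises.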

Counterintuitive? Yes. If intuition and ‘our understanding’ were all there was to it, the sun would still be revolving around the earth. If these results make one feel like an alien, it cannot be helped. Of course, one can choose to ignore them. There are flat-earth societies and groups who believe we never put a man on the moon. But the truth is that statistics provides more insight into the past, and it happens to be a rather difficult subject.

Getting more complicated

Let me state some more statistical results. And let me warn you that things will get complicated.

If we take the career of Glenn McGrath as the data set, we will find that the probability of his giving away less than 3.2 runs per over in an Ashes Test was approximately 0.78 or 78%.

This is derived from the cumulative distribution function of the Normal distribution:

P(X < x) = Φ((x − μ) / σ)

where Φ is the standard Normal cumulative distribution function. We substitute 3.2 for x, 2.7 for μ and 0.594 for σ, μ and σ being the mean and standard deviation of Glenn McGrath’s bowling economy rate in the Ashes Tests.
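For those who want to verify the number, the calculation needs nothing beyond Python’s standard library. (The small gap between the ~0.78 quoted above and the 0.80 this produces comes down to rounding in the quoted mean and standard deviation.)

```python
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X < x) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

# McGrath's Ashes economy rates: mean 2.7, standard deviation 0.594
p = normal_cdf(3.2, mu=2.7, sigma=0.594)
print(f"P(economy < 3.2) = {p:.2f}")
```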

That formula, in turn, can only be used after performing the Anderson-Darling Test, which confirms that McGrath’s bowling economy rates in Ashes Tests do indeed follow the Normal distribution. No, Anderson-Darling is not the new name for Ashes Tests after Jimmy Anderson and Joe Darling. It is a statistical test to check whether a data set follows the Normal distribution.

And let me add here that this is one of the very few measurements in cricket that do indeed follow the Normal distribution. The runs scored by a batsman from series to series over his career fail the normality test in all but a few very exceptional cases. Even McGrath’s bowling average in Ashes Tests does not follow the Normal distribution. And if the data is non-Normal, analysing it, and testing whether one data set differs from another, becomes somewhat more complicated.

Let us take a relatively simpler example.

Let us consider all the opportunities taken and grassed by Rahul Dravid while fielding since 2001, from the time such data became available. We get the following disconcerting statistic — if 10 catches went to him, the probability that he would drop at least one of them was as high as 95.07%. In fact, the data tell us that there was a high 50% probability that at least three catches out of the 10 opportunities would be dropped. It may jar with our perception of the beloved and omnipresent slip fielder, and it may be difficult to digest that the Binomial distribution,

P(at least k drops in n chances) = 1 − Σ_{i=0}^{k−1} C(n, i) pⁱ (1 − p)ⁿ⁻ⁱ

where p is the probability of dropping any single chance, is the formula that can give us that result, but the data and the statistical methods confirm it.
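The per-chance drop rate is not quoted above, but it can be backed out of the 95.07% figure: solving 1 − (1 − p)^10 = 0.9507 gives p ≈ 0.26, and both quoted probabilities then follow from the Binomial distribution. A sketch:

```python
from math import comb

# Back out the per-chance drop rate from the quoted 95.07% figure: ~0.26
p_drop = 1 - (1 - 0.9507) ** 0.1

def prob_at_least(k, n, p):
    """P(at least k drops in n chances), for drops ~ Binomial(n, p)."""
    return 1 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k))

print(f"drop rate    : {p_drop:.3f}")
print(f"P(>=1 in 10) : {prob_at_least(1, 10, p_drop):.4f}")  # ~0.9507
print(f"P(>=3 in 10) : {prob_at_least(3, 10, p_drop):.2f}")  # ~0.50
```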

These two examples were used because here, fortunately, the data follow standard distributions. Hence they are simple. In most cases we don’t have that luxury. Analysis in those cases, as stated, becomes increasingly difficult.

And in both these cases, we are talking of a single variable, not 22 dependent on each other as in a Test match. In a Test match, there are other confounding parameters — surface, condition, time frame, era, rain and others.

Yes, statistics the subject is rather more complicated than looking at two averages and deciding whether to put a greater than or a less than sign between them.

And while looking at averages and putting a > or < or = sign suffices for fan debates, for serious computation a lot more refinement is necessary.

If these look far too complicated to be cricket (“hey, it’s a game played with bat and ball and not calculators”), let me tell you something. Cricketing injuries sound equally complicated when explained in medical terms. So does the physics of the thermodynamic imaging we see on Hot Spot; in fact, to me the latter is distinctly more complicated.

To do these analyses properly, one cannot afford to dumb things down into understandable-to-all terms. Our comfortable illusions of ‘understanding’ this favourite game of ours are a means of self-deception, not much more. The statistics can show very different results.

The medical curriculum involves anatomy, physiology, histology, biochemistry, obstetrics and various other subjects. Similarly, to earn a degree, a statistician has to go through mathematical and statistical topics ranging from advanced calculus, linear algebra, time series analysis and sampling theory to scientific inference, design of experiments, statistical modelling, stochastic processes and, of course, probability. All these subjects form the base on which we can build meaningful analysis of data as complicated as that spewed out by cricket scorecards.

Probability Perils

Probability is of course at the heart of all statistics. And as we saw in the examples above, human beings are not geared to think in terms of conditional probabilities.

Take another two examples.

Question 3: Australia and South Africa play each other in a One-Day International (ODI).

These are the suggested outcomes
a) Australia wins
b) South Africa wins
c) Tie
d) Australia wins a hard-fought match
e) Match abandoned

One is asked to order the suggested outcomes according to the probabilities one thinks are associated with the options; in other words, in order of the likelihood of each of these options coming true.

Question 4: A is a star batsman of the team. When he fails, 70% of the time the team is in crisis. When he succeeds, 60% of the time the team is not in crisis. He fails 15% of the time. Estimate the probability of A failing when there is a crisis.

Of the number of participants we asked Question 3, significantly more ranked (d) higher than (a) than vice versa. (When we say significant, it is not just a word or a manner of expression. There are tests of hypothesis to find out whether a difference is statistically significant or a random variation.) However, ‘Australia wins’ as an event includes ‘Australia wins a hard-fought match’, and should always have a higher probability than the latter.

This is an adaptation of the well-known Linda problem, a classic demonstration of the conjunction fallacy.

Question 4 actually gives a full demonstration of the way our brains fail when faced with conditional probability. The answers generally cluster between 60% and 70%. The actual answer happens to be 23.59%. The computational formula is the following:

P (A fails given crisis) = P(crisis given A fails) x P(A fails) /[P(crisis given A fails) x P(A fails) + P(crisis given A succeeds) x P(A succeeds)]
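That formula is simply Bayes’ theorem, and plugging in the numbers from the question takes a few lines:

```python
p_fail = 0.15
p_crisis_given_fail = 0.70
p_crisis_given_success = 1 - 0.60  # the team IS in crisis in 40% of his successes

# Bayes' theorem, exactly as in the formula above
p_fail_given_crisis = (
    p_crisis_given_fail * p_fail
    / (p_crisis_given_fail * p_fail
       + p_crisis_given_success * (1 - p_fail))
)
print(f"P(A fails | crisis) = {p_fail_given_crisis:.4f}")  # ~0.236
```

The star batsman fails in under a quarter of the crises, far from the 60-70% our intuition suggests, because his 85% success rate dominates the calculation.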

The problem is that our brains have not been adapted for such probabilistic thinking, especially when conditional probability is involved. Now just think how complicated it must be to analyse the data of 22 performances in a match, all dependent on one another.

Mathematics too gets complicated with the introduction of a third parameter. Reflect for a moment on the Duckworth-Lewis heartburns.

As long as there were just two parameters, runs and overs, we did not have a problem understanding target revision. We did have reservations. We knew that it was not fair to simply apply the unitary method to the reduced number of overs. We knew it was ridiculous to exclude the most economical overs. However, we never quite had a problem figuring out how the targets were revised.

Enter a third dimension, wickets. And, alas, the majority find it impossible to gauge the working behind the method. And there are suspicions, heartburns and, as is inevitable in cricketing discussions, abuse. Ironically, Duckworth-Lewis is a beautiful implementation of mathematics, built on an exponential decay function, and the fairest technique available. But very few understand the logic behind its results.
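The published form of the Duckworth-Lewis resource function is indeed an exponential decay: Z(u, w) = Z0(w) × (1 − exp(−b(w)u)), where u is the overs remaining and w the wickets lost. The constants below are illustrative stand-ins of my own, not the official table values, but they show the shape of the idea:

```python
import math

def resources(overs_left, z0, b):
    """Illustrative D/L-style resource function: exponential approach to
    the asymptote z0 as overs remaining grow. In the real method z0 and b
    depend on wickets lost; the constants used here are made up."""
    return z0 * (1 - math.exp(-b * overs_left))

# Toy constants, normalised so that 50 overs with no wickets lost = 100%
Z0, B = 108.0, 0.035
full = resources(50, Z0, B)
for overs in (50, 25, 10):
    share = 100 * resources(overs, Z0, B) / full
    print(f"{overs:2d} overs left: {share:.0f}% of resources")
```

Halving the overs from 50 to 25 leaves roughly 70% of the resources in this toy version, which is exactly the kind of non-proportional behaviour that confuses anyone expecting the unitary method.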

Tell-tale signs

Now, let us consider what happens in a match.

There are 22 performances and they are not independent of each other. This means that the events (performance of a single individual) cannot be studied in isolation. There are 21 other events which amount to an onrush of conditional probabilities on anabolic steroids.

To put them on the same mathematical scale and evaluate the effect each had on the match, to carry the same across all matches ever played, and to evaluate rankings of players across eras… well, even for a qualified statistician this is an extremely difficult task.

One has to be very, very careful about what one wants to measure, how one measures it, and whether it makes sense.

Is it the effect the individual had on the result? This is a tricky one. A win generally indicates a favourable situation, a loss generally indicates more difficult circumstances. Does performance in a win mean better effect? History will tell us that it is far more difficult to score more runs or take more wickets in defeats. Is the performance the cause of wins or is the win the cause of performance? Is the performer a game-changer or is he a fair-weather player? Or is it neither, just an illusory correlation? These are not easy questions with simple answers.

At the same time, during the course of a discussion with one such proponent of single metric techniques to measure effectiveness on results, one got to know that ‘for obvious reasons’ second innings performances were given more weight than first innings.

Now, these are telltale signs of the statistically uninitiated trying to frame a solution – and producing questionable results due to either intention or ignorance.

What exactly is one trying to measure? Effect leading to wins? Or is it the difficulty of the task?

If it is the former, most Tests are decided early, on the basis of the advantage gained in the first innings. So, giving more weight to second-innings performance is fallacious.

If it is the latter, then performances in losses should get far more weight than performances in wins, because historically it is far more difficult to score heavily or take more wickets in lost games.

A trained statistician, on the other hand, would never attach more weight to certain parameters because of ‘obvious reasons’ or ‘cricketing common sense’. When data is present, no literary licence or specialist acumen (pseudo or otherwise) is allowed to interfere with the analysis.

First, one would decide what exactly one wants to measure: effectiveness, or performance across degrees of difficulty. Next, there would be tests of hypothesis, rigorously conducted and concluded to an ascertained degree of confidence, to find out whether or not the second innings is significantly more important/effective/difficult than the first, whether performances in wins are easier than performances in losses, and more.
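To make this concrete, here is the skeleton of one such test of hypothesis, a permutation test on entirely invented runs-per-innings figures (the numbers are mine, for illustration only, not real innings):

```python
import random

# Invented runs-per-innings samples for a hypothetical batsman
wins   = [84, 47, 102, 36, 91, 58, 77, 120, 43, 66]
losses = [31, 55, 12, 48, 27, 64, 19, 38, 22, 51]

observed = sum(wins) / len(wins) - sum(losses) / len(losses)

# Permutation test: if the win/loss label carried no information,
# shuffling the labels should often produce a difference in means
# at least as large as the observed one.
rng = random.Random(0)
pooled = wins + losses
n_extreme, trials = 0, 10_000
for _ in range(trials):
    rng.shuffle(pooled)
    diff = sum(pooled[:10]) / 10 - sum(pooled[10:]) / 10
    if abs(diff) >= abs(observed):
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference: {observed:.1f} runs, p-value ~ {p_value:.3f}")
```

If the labels carried no information, label-shuffling would routinely reproduce differences as large as the observed one; a small p-value says it does not, to an ascertained degree of confidence.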

Even scientists and statisticians go wrong, so God help the pseudo-analyst

In every field of science, practising scientists have to use statistics to prove their claims, to establish their experimental findings. The subject cuts across all disciplines. However, even scientists who take a crash course in statistics to perform their significance tests are prone to errors. Physicists, medical researchers — in other words, people of notable scientific achievements — also stumble in analysing and interpreting statistical data.

Statistics is not easy. In fact, even trained statisticians make mistakes in their results. There are well-known pitfalls, like the base-rate fallacy, regression to the mean, misjudging significance and so on, even if one follows statistical techniques.

When non-statisticians make attempts at analysis of data of such complicated nature, there are way too many problems to list here. The fallacies that can come out of such handling of statistical and mathematical topics can fill books, and indeed several have been written on the subject. One I would specifically recommend is How to Tell the Liars from the Statisticians by Robert Hooke (Marcel Dekker, 1983).

I will list a few, nevertheless.

1. Cherry-picking and misunderstanding of outliers
2. Mixing up discrete, continuous and binary data and treating them in the same way
3. Expecting certainty
4. Ignorance of the difference between small and large samples
5. Ignorance of variability
6. Ignorance of regression to the mean
7. Overemphasis on averages
8. Chipping away at data or adjusting weights to make it fit what we want to see

If some data set shows results that contradict the initial claim (the claim itself often based on gut feel or cherry-picked occurrences), a non-statistician usually resorts to one of two alternatives if he wants to carry on with numerical arguments. Either he over-scrutinises the data, chipping away at it till it reveals some facet that meets his objective, or he calls for an entirely different test with different parameters.

A fully developed pseudo-mathematical model that claims to produce a miracle value for ranking each player is prone to exactly this: starting with an objective and playing around with ad-hoc weights till the desired result is obtained.

But such results are quite popular when one announces that a computerised version of the pseudo-statistical method has generated the following conclusion: ‘X is the most effective player of the World Cup’.

Why does such analysis get eyeballs?

Robust statistical analysis, involving proper, rigorous and verifiable methods, cannot be shared with the general public in popular media. It is too difficult to understand.

There are some statistically or mathematically trained cricket fans who have done useful work in the field of cricket analysis, but their works are scarcely available, locked up in obscure scientific journals published mainly in England and Australia.

But there is always the desire for a simple solution when there is none, and this is a major cause for such thriving chicanery.

People love it when they see numbers quoted at random when the stakes are low.

Two minutes after midnight on October 12, 1999, Kofi Annan, the then UN Secretary-General, was photographed holding the tightly swaddled Baby Six Billion in a Sarajevo hospital. It was said that Annan had just happened to be there when the six billionth birth took place. And an overwhelming proportion of the six-billion-strong world population swallowed this atrocious gimmick. In truth, with several babies born every second around the world, it was impossible to determine who the six billionth baby really was. And no census has ever been accurate to the last digit. However, people love numbers when they are not affected by them and do not have to work too hard to derive them.

A similarly popular technique nowadays is to produce maps of the world coloured by different indices, ranging from a happiness index to a racial-tolerance quotient. The media has realised it makes more sense to flash pictures to get better traffic — that holy grail of this information-cluttered cyberspace. Most of the surveys conducted for these infographic maps, from the questionnaires to the data analysis, are generally supreme nonsense. However, they are ‘liked’ and ‘shared’ without much ado, with hardly anyone going beyond the maps to validate the results in detail.

With cricket, making a slightly sensational claim and ranking cricketers achieves this wonderful purpose. The manoeuvre draws plenty of site clicks, shares and tweets. For worshippers of the various idols, these nuggets play a crucial role: they are shared as good news to boost one’s cricketo-social standing, or trashed when they do not conform to one’s preconceived notions. However, hardly anyone in our net-based community will ever take the trouble of going through the methods behind the claims. If there are such people, their number is statistically insignificant.

Amateur breakthroughs are obsolete… hokum

Someone achieving some breakthrough statistical/mathematical/physics/medical work without specialised training could have been quite possible in 1750. It is well-nigh impossible today.

This is because these subjects are no longer in their infancy. The number of amateur mathematicians or statisticians doing useful work dwindled through the 20th century as the subjects advanced. The amount of work needed to get up to speed with the state of the art, let alone attempt new breakthroughs, is now well beyond high-school or college-level courses.

Someone like Robert Ammann can perhaps still contribute, but it has to be in a new branch of mathematics which happens to be in its infancy… like Penrose tilings. In areas where lots and lots of work have been done, such claims are almost without exception pure hokum.

Just like there are so many claims of alternative medicine or non-medicinal cures for cancer heard of every day — from the use of pineapples to cannabis to reiki. Unfortunately, none of them stick. We have to go back to doctors, that too the ones specialising in cancer treatment.

That is why, to justify the possibility of excelling in a field we know nothing about, we still have to refer to Leonardo da Vinci, almost 500 years after his death — for two reasons. One, he was an outlier, a supremely gifted individual. And two, repeating his brand of multi-stream genius is extraordinarily difficult in the current day and age, so there are very few recent examples.

Does it mean only statisticians understand cricket? Obviously not. Some statisticians may not have a clue which way the ball is going to turn when released with an off-spinner’s action — even one without the straightening jerk. Many have not played the game enough to judge balance and the position of the head during strokeplay.

Even if they are very well aware of the game and have all the data in front of them, no statistician will ever be able to predict how much a batsman is going to score in a particular innings.

But the difference is that they will know that they cannot predict.

At the same time, when the records are looked at in retrospect, the statistician is much better placed to determine whether a player was effective or not, whether he succeeded or failed in crisis, whether he played West Indian pacers better than anyone else or not. It is not that they are more intelligent. It is just that they have been trained in a way which makes them better adapted to interpret complex data.

Finally, a statistician will know the limits of numbers. He will know what can be done and what cannot.

So if someone claims he has found a way of putting a number to relative effect, involving statistics with 22 conditional probabilities… and he adds that mathematical or statistical expertise is not required because this hugely complicated endeavour does not involve advanced mathematics and can be worked out on the back of an envelope… well, all that a trained statistician can infer is that such an envelope is almost certain to contain only crystallised snake oil.

(Arunabha Sengupta is a cricket historian and Chief Cricket Writer at CricketCountry. He writes about the history and the romance of the game, punctuated often by opinions about modern day cricket, while his post-graduate degree in statistics peeps through in occasional analytical pieces. The author of three novels, he can be followed on Twitter at http://twitter.com/senantix)