The Eley Diary, well known to many shooters, includes tables indicating the size and – indirectly – the quantity of shot required to kill game. The quantity of shot is described as a minimum number of pellets which, when fired at a target, would be contained in a 30-inch circle, centered on the area of highest pellet density.
[TBC – Give examples and an image of the table from the diary if possible.]
There is little wrong with the suggestions contained within the diary. Most of the values presented were derived empirically and are as valid today as they were when they were first published. Indeed, they are the basis of much of the understanding presented by this website, although we take a particular view on some of the figures and adapt them for our purposes with appropriate consideration.
There is, however, a common misunderstanding as to what, for example, the “120 pellets in a 30″ circle required for pheasant” actually means. Most people and much of the shooting literature take it, wrongly, to be an indication of the required pattern density: that is, a value expressing that a certain number of pellets per square inch (in this case, 0.1698 pel./in²) must impact the target for the pattern to be lethal. Whilst this might appear mathematically sensible, it is also completely wrong.
In fact, the number published in the Eley diary is a threshold value which should be exceeded by the resultant of the probability function that describes the behaviour of the shot cloud for a particular combination of gun, cartridge, choke and range. That is:
f(x) >= Z
where x is the diameter of the circle, f(x) is the probability function and Z is the threshold value for the minimum number of pellets contained within that circle. In the case of the published figures, we can say that, for a cartridge suitable for the killing of pheasant
f(30) >= 120
but the question remains, what is the function represented by f(x)?
It is broadly accepted in shotgunning circles that the distribution of pellets around the central point of a pattern approximates the Normal distribution. This is a common probability function, applicable to an enormous variety of real-life situations, which produces the familiar “bell curve” of values clustered about a central mean.
The Normal distribution, thus applied, states that a pellet is most likely to be found on the axis of the shot cloud and increasingly less likely to be found the further from that axis one looks. Since this is equally true in all directions, the Normal distribution is of an appropriate shape to model this behaviour.
Of course, not all gun / cartridge / choke / range combinations produce the same patterns. However, outside of a few specific examples (the “blown” pattern for one), it is possible to describe the broad distribution of pellets in most shot clouds using a function of the Normal distribution which has been shifted and scaled using its two parameters mu (μ) and sigma (σ).
Understanding the Normal Distribution
Consider the chart displaying the “standard” Normal distribution function shown below. The curve indicates, via the values on the Y-axis, the probability density of encountering an “event” in a Normally-distributed data set having the value indicated by the X-axis. The area under the curve, which extends to infinity in both directions, is 1.0, representing the certainty of an event occurring somewhere between those two infinite limits.
More usefully, one can determine mathematically the probability of an event occurring which has a value between two fixed limits. For example, the likelihood of an event whose value is between -1.0 and 1.0, occurring in a data set whose probability is described by the standard Normal distribution, is 68.27%, which is represented by the shaded area in the following graph:
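This figure is easy to verify with a few lines of Python, using the standard library’s statistics.NormalDist class (available from Python 3.8; the variable names below are our own):

```python
# Check the probability of a standard-Normal event falling between -1 and +1.
from statistics import NormalDist

std = NormalDist(mu=0.0, sigma=1.0)  # the "standard" Normal distribution

# P(-1 <= X <= 1) is the area under the curve between the two limits,
# i.e. the difference of the cumulative distribution function (CDF) values.
p = std.cdf(1.0) - std.cdf(-1.0)
print(f"P(-1 <= X <= 1) = {p:.2%}")  # approximately 68.27%
```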
The general shape of the Normal distribution is fixed by definition, but individual instances of the distribution vary and are described by two parameters. Mu is the mean of the probability distribution – that is, the average of all the values in any large sample chosen at random. Sigma is the standard deviation of the distribution, which determines how “stretched” or “squashed” the curve is.
In mathematics and physics, these values can be manipulated to scale the Normal distribution to a real-world data set.
Using the Distribution
In shotgun patterning, it is most convenient to set mu to 0, representing the shot cloud axis, with the X-axis beneath it representing the displacement of individual pellets from that axis. Sigma is directly related to overall pattern performance and has a particular relationship to choke, which will be elucidated below. Its value must be discovered empirically, by patterning a given combination of gun / cartridge and choke at a given range.
The result of discovering the mu and sigma values for a particular combination is that it becomes possible to predict the likelihood of any given pellet ending up X distance from the center of the shot cloud. This can aid in the analysis of patterns where predicted pellet counts can be used to determine whether a cartridge has a particularly “hot” or “cool” pattern center and allows the mathematically-correct comparison of the quality of patterns of differing sizes (e.g. pellet counts from 20″ and 30″ circles) and indeed, of patterns of non-standard size with the standardized published threshold values.
A Worked Example
To test out the examples presented here, readers might like to use the Normal distribution calculator cited at the end of the article – Ed.
Scenario: On 11th June 2017, The Hedgewalker tested the Lyalvale Express “Supreme Game” .410 cartridge containing 9g of #6 shot. Both 20-yard patterns shot using the cylinder choke of the test gun produced pellet counts of 57 in a 20″ circle, representing 56% of the original shot charge.
Question: Is this level of performance equal to Eley’s minimum standard for killing pheasant of 120 pellets in a 30″ circle?
The Wrong Approach: Equivalence of Pattern Density
Let’s take the wrong approach first and treat Eley’s number as an indication of pattern density rather than a resultant from a distribution function. If this is a pattern density, then what we’re saying is that in the 707 in² of a 30″ circle, there should be 120 pellets, which equates to a density of
120 pel. / 707 in² = 0.1698 pel./in².
Compare to that our pellet count of 57 in a 20″ circle, whose area is approximately 314 in²:
57 pel. / 314 in² = 0.1815 pel./in².
This exceeds the 0.1698 pel./in² required by the threshold data and so we might conclude that the cartridge is suitable for pheasant. Unfortunately, this is not the case.
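For readers who would like to see the arithmetic laid out, the flawed density comparison above can be reproduced in a few lines of Python (a sketch only – the variable names are our own, and the article rounds the 20″ area to 314 in²):

```python
# Reproduce the (wrong) linear pattern-density comparison.
import math

# Eley threshold: 120 pellets in a 30-inch circle (15-inch radius).
threshold_density = 120 / (math.pi * 15**2)  # ~0.1698 pellets per sq. inch

# Test pattern: 57 pellets in a 20-inch circle (10-inch radius).
test_density = 57 / (math.pi * 10**2)        # ~0.1814 pellets per sq. inch

print(f"threshold density: {threshold_density:.4f} pel/in^2")
print(f"test density:      {test_density:.4f} pel/in^2")
print("passes (wrongly):", test_density > threshold_density)
```

The comparison “passes”, which is exactly the misleading result the article warns against.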
The Right Approach: One Probability Function to Rule Them All
Now let’s treat Eley’s number as the resultant of a Normal probability function. Because the threshold of 120 in the circle has been empirically determined, it doesn’t really matter how we get 120 pellets into the standard circle, provided that we do, so we don’t need to know exactly which combination of gun, cartridge and choke gave rise to that figure.
In fact, where the Lyalvale cartridge is concerned, we will never be able to get 120 pellets into the circle, because the cartridge only contains an average of 102 pellets to start with. What we can do, however, is ask whether the performance expressed by the probability function fitted to a smaller area (e.g. the 20″ circle) would, when extended to the standard 30″ circle, exceed the threshold value and therefore represent the required level of performance, albeit over a smaller effective pattern area.
To answer this question, we use something called the Inverse Normal Probability function to work backwards from a probability to the values for mu and sigma, which tells us exactly what the probability function for our gun, cartridge, choke, range and a 20″ circle actually is. We can then:
- use this function to find the probability of a pellet falling within a 30″ circle under the same conditions
- multiply that probability value by the number of pellets originally in the cartridge to predict how many impacts would be seen in a 30″ circle of identical character
- compare the resulting number with our threshold value to establish whether the performance represented by our function exceeds, or does not exceed, the required level of performance.
Thus, if we take again the example of the Lyalvale cartridge above, we fix the value of the mean, mu, at 0.0 and find that, to put 56% of the pellets from our cartridge into a 20″ circle at a range of interest, the standard deviation, sigma, of the probability function is equal to 12.95.
We then expand the range of interest for the same function to -15 to +15, representing the 30″ circle, resulting in a 75.33% probability of a pellet impacting that area.
Multiplying the number of pellets in the cartridge (102) by the calculated probability of impact in a 30″ circle – 75.33% – gives a predicted pellet impact count of approximately 77 which is far below the 120 pellet threshold.
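The whole calculation can be sketched in Python with the standard library’s statistics.NormalDist, under the article’s one-dimensional assumption that the 20″ circle corresponds to the interval -10″ to +10″ about the pattern centre (the variable names are our own):

```python
# Fit a Normal model to the 20" pellet count, then predict the 30" count.
from statistics import NormalDist

pellets_in_load = 102    # average pellet count of the Lyalvale cartridge
hit_fraction_20 = 0.56   # 57 of 102 pellets landed inside the 20" circle

# Step 1: with mu fixed at 0, find sigma such that P(-10 <= X <= 10) = 56%.
# P(|X| <= 10) = 2*Phi(10/sigma) - 1, so Phi(10/sigma) = (1 + 0.56) / 2.
z = NormalDist().inv_cdf((1 + hit_fraction_20) / 2)
sigma = 10 / z
print(f"sigma = {sigma:.2f}")                    # ~12.95

# Step 2: widen the interval to -15..+15, representing the 30" circle.
dist = NormalDist(mu=0.0, sigma=sigma)
p30 = dist.cdf(15) - dist.cdf(-15)
print(f'P(pellet in 30" circle) = {p30:.2%}')    # ~75.33%

# Step 3: predict the 30" pellet count and compare with the threshold.
predicted = pellets_in_load * p30
print(f'predicted 30" count = {predicted:.0f}')  # ~77, well short of 120
```

The printed values match the figures derived above: sigma of about 12.95, a 75.33% probability for the 30″ circle, and a predicted count of roughly 77 pellets.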
What does this mean?
The simplest and most obvious conclusion to be drawn from this analysis is that it is very unlikely that the Lyalvale cartridge would be suitable for use against pheasant at any ordinary (e.g. 20-35 yards) range. We hasten to add that we recognize that one could fire such a cartridge at a January cock-bird and bowl it over without incident – we just don’t think that it would happen that way very often.
The Eley “thresholds” are not themselves absolute: they represent standardized results from a set of probability functions which describe – we understand – a 95% likelihood of killing the bird if the shot is on target. The fact that the probability of an occurrence does not meet a 95% confidence threshold does not by any means make it impossible.
Perhaps in our case, we would find the Lyalvale shell killed the bird six times in ten – but how many of us would be happy with dispatching 40% of our birds by hand?
Beyond that, we are left with questions of effective pattern area and this is where the concept of linear pattern density rears its ugly head once again.
The SmallBoreShotguns team, collectively, feels that calculating linear pattern densities risks introducing mathematical flaws which mask or misinterpret the performance of a gun, cartridge and choke combination and that to do so without being absolutely sure that the approach is appropriate makes that risk a likelihood. Indeed, it is safer not to employ them at all.
Further discussion of this point may merit a follow-up article, but let us repeat the example above: linear pattern analysis wrongly suggested that the cartridge examined here not only met, but exceeded, the performance threshold required for a 95% likelihood of killing a pheasant; the mathematically-rigorous approach showed that it provides barely 65% of the required performance.
The Real World
Perhaps readers will be asking at this point “is it that simple?”. Unfortunately, it is not. Focusing again on the relationship between the standard 30″ circle and its oft-used 20″ counterpart, we should point out the differences between what a pure mathematical analysis suggests and what empirical testing shows.
The table below gives, for 30″ patterns of known percentage, the empirically-derived expected pattern percentages for the central 20″ area and for the ring outside that central circle, extending to the 30″ diameter. These are taken from research by Ed Lowry, who did a huge amount of work surrounding shotgun ballistics, and do broadly represent the performance in the majority of cases, within a few percent variation either way.
Beside the “Lowry” numbers are listed the equivalent percentage distribution of pellets between the two areas as predicted by the Normal distribution and (for entertainment value only) the percentage of pellets which a linear extrapolation of pattern density would predict according to the ratio of the areas.
[TBC – Table: 30″ pattern percentage; Lowry empirical percentages for the central 20″ circle and the 20″–30″ ring; Normal-distribution predictions; linear-extrapolation predictions.]
Two things are immediately clear from this table. The first is that linear extrapolation of pattern percentage by the ratio of areas is completely mad, suggesting that more pellets should find themselves in the outer 10″ of the standard circle than in the central 20″ diameter. We hope that readers will have enough experience by now to realize that almost any approach related to the ratios of pattern areas for predicting performance is a scientific and mathematical nonsense.
The second, more significant feature of the data is that, although the values for the Normal distribution track the values from Lowry, there are significant discrepancies between them, with the Normal distribution consistently over-estimating central pattern density.
Let us be clear: we stated above that the distribution approximates reality and we do not depart from that, but the pure distribution function does not match the data from Lowry, which – as far as the SmallBoreShotguns team is concerned – is to be trusted.
The reality is therefore more complicated than a single mathematical distribution can describe. We note that the shape of the two curves is similar, with both showing a tailing off of performance in the 20″-30″ ring at 80% overall pattern density. In fact, if one applies a correction factor of 0.84 to the Normal distribution’s predicted 20″ pattern percentages, the figures end up very close to those recorded by Lowry, but even then this is not perfect. We are aware that one manufacturer of ballistics software has taken precisely this approach to “fix” their shotgun pattern analysis and generation software.
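As a sketch of how such a correction might be applied – the 0.84 factor is the one quoted above, but the function and its name are purely our own illustration, not the software manufacturer’s code – consider:

```python
# Predict the central 20" pattern percentage from a known 30" percentage
# using the Normal model, then scale by the empirical 0.84 correction.
from statistics import NormalDist

CORRECTION = 0.84  # empirical factor bringing the Normal model near Lowry

def predicted_20in_fraction(p30: float) -> float:
    """Normal-model fraction of pellets inside the central 20-inch circle,
    given the fraction p30 inside the 30-inch circle (mu fixed at 0)."""
    z = NormalDist().inv_cdf((1 + p30) / 2)  # fit sigma: 15/sigma = z
    sigma = 15 / z
    d = NormalDist(0.0, sigma)
    return d.cdf(10) - d.cdf(-10)

for p30 in (0.5, 0.6, 0.7, 0.8):
    raw = predicted_20in_fraction(p30)
    print(f'30": {p30:.0%}  Normal 20": {raw:.1%}  corrected: {raw * CORRECTION:.1%}')
```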
Ultimately, the probability of a pellet having a certain displacement from the center of a shot cloud is Normal-like, but not Normal. Using the distribution to predict pattern density for larger or smaller circles for a known pattern will give better results (i.e. results closer to reality) than linear interpolation, but will tend to under-estimate when moving from a smaller circle to a larger and over-estimate in the opposite case. Given general practice in patterning, this is probably preferable to the reverse situation seen with linear interpolation, where a wholly inadequate cartridge can be misidentified as suitable for a given quarry.
Furthermore, we suspect that there are factors, such as pellet deformation, among the other features affecting shotgun performance, which contribute to a wider spread of shot in the shot cloud than the Normal distribution predicts. We suspect that, were Lowry’s numbers for steel and other harder shot metals available, they would show a much nearer relationship to the theoretical values. We also suspect that his numbers will break down for extremes of range and shot size – particularly where small shot is concerned – and for the larger gauges (e.g. 4, 8) where proportionately less of the shot charge is ever in contact with the barrel wall.
Finally, we wonder whether blown patterns and the occasional “doughnut” patterns that we see are actually described by a Normal distribution with a non-zero mu value – but these are all questions which we cannot currently answer, best left for another day and another article.
We are grateful to the providers of Online Statistics Education: A Multimedia Course of Study which can be found at http://onlinestatbook.com/ and the project leader for that website, David Lane, for the use of their software to create some of the illustrations for this article.