In 1993, Wayne James Nelson was accused of trying to defraud the state of Arizona of $2 million. Nelson, employed by the state, wrote 23 checks to a fictitious vendor in seemingly random amounts. But his plan had a major flaw: the amounts weren’t random enough.
From just these numbers, the state deduced that the amounts had likely been faked. What makes these numbers suspicious?
To answer this, let’s take a detour into a story of two scientists, logarithms, and dirty books.
Simon Newcomb was an astronomer, mathematician, and polymath living in the late nineteenth century. In those pre-calculator days, mathematicians relied on reference books like Adriaan Vlacq’s Arithmetica Logarithmica. This was a book whose sole purpose was to provide pages upon pages of numbers–pre-calculated logarithms, to ten decimal places, of every natural number from 1 to 100,000. (Years later, some 603 errors were found in it. That anyone spotted those errors among the 2 million digits the book contained is remarkable.)
One day, while perusing one of these logarithm books at the library, Newcomb noticed something peculiar–the pages were dirtiest at the beginning, and progressively became cleaner throughout.
You might not think this is a very profound discovery, and you would be right. Dirty pages are not intrinsically interesting. But Newcomb was not one of us–he was a nineteenth-century mathematician. For him, this pattern of dirt was fascinating, and it sparked a desire to investigate further.
He eventually found patterns in the madness that ran deeper than the dirt on the surface. First, the dirt was not just any dirt; it was the dirt of overuse, left by countless fingers. And it was not simply that earlier pages were dirtier; rather, pages with numbers beginning with 1 were the dirtiest, followed by those with a starting digit of 2, then 3, and so on. The cleanest, least used pages were those with numbers beginning with 9.
This was far more interesting than a mere dirt gradient. Were scientists and mathematicians looking up numbers starting with 1 much more often than those starting with 9?
Dirty Books, Again?
Newcomb published an article about this phenomenon in 1881, even going so far as to describe a precise law for the probability that a number–any number you might find–starts with a certain digit. By his reasoning, the probability of a number beginning with a digit d is given by:

P(d) = log₁₀(1 + 1/d)
This implied that you would find numbers starting with the digit 1 about 30 percent of the time, and with the digit 9 a mere 4.6 percent of the time.
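Newcomb’s formula is easy to check numerically. A minimal sketch in Python (the function name is ours, not Newcomb’s):

```python
import math

def benford_probability(d: int) -> float:
    """Probability that a number's first significant digit is d,
    per Newcomb's law: log10(1 + 1/d)."""
    return math.log10(1 + 1 / d)

for d in range(1, 10):
    print(f"{d}: {benford_probability(d):.1%}")
# → 1: 30.1%, 2: 17.6%, ..., 9: 4.6%
```

Note that the nine probabilities sum to exactly 1, since log₁₀(2/1) + log₁₀(3/2) + … + log₁₀(10/9) telescopes to log₁₀(10).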
It was a profound theory–sadly, Newcomb’s article went largely unnoticed.
Luckily, fifty-seven years later in 1938, a physicist by the name of Frank Benford was around to ensure the obscure theory’s prosperity. In a quite literal example of history repeating itself, Benford–apparently unaware of his predecessor–made the exact same discovery as Newcomb through the exact same observation. He, too, found dirtier pages at the start of logarithm books, and even derived the same formula. (Evidently, dirty pages are more interesting than we think.)
But unlike Newcomb, Benford had the means to test his theory empirically. He spent several years gathering 20,229 observations from a diverse range of data sets: baseball statistics, areas of rivers, atomic weights of elements–even arbitrary numbers plucked from Reader’s Digest articles.
This mashup of all kinds of disparate data happened to be a near-perfect fit with the theory: 30.6% of all the numbers in the set began with 1 (compared with an expected 30.1%), 4.7% began with 9 (4.6% expected), and the digits in between appeared with gradually decreasing frequency.
Benford’s findings were striking. His article on the topic, unlike Newcomb’s, spread quickly through the academic world, and the peculiar phenomenon of first-digit distributions soon became known as Benford’s law.
Perhaps Newcomb deserves more credit for discovering the law half a century earlier. Unfortunately for him, his plight is poignantly captured by Stigler’s law of eponymy:
No scientific discovery is named after
its original discoverer.
In any case, the name of the law tells us nothing about why we observe it or where it applies, which are the real puzzles here. Let’s keep digging.
Newcomb’s Benford’s Law
Despite many attempts by mathematicians to prove why Benford’s law exists, none has been entirely successful. One promising development, however, comes from the assumption of scale invariance, the idea that any universal law about digits should naturally be independent of the units used. If we convert from feet to meters, we should still see a similar distribution of first digits.
Empirically, this turns out to be true. As an example, let’s pick a random set of data–say, the masses of 618 planets in kilograms.
Here is a chart comparing the first digits of those masses against Benford’s law:
As we can see, the data follow Benford’s law quite well, though not perfectly. Here is another comparison using the same data, except the masses are expressed as multiples of the Earth’s mass:
Once more, this time expressing the planetary masses in units of American Big Macs (we specify American because, apparently, Big Macs vary slightly in weight across countries):
In this last case, we see a slightly poorer fit of the data with Benford’s law, especially for the digits 3 and 4. But, keeping in mind that our initial data in kilograms was also slightly off, this is a reasonable deviation. With a larger pool of data, we would likely have found a closer fit regardless of scale.
We should note, though, that even with this deviation, it is still quite clear that the majority of the values begin with lower digits. The similar trends across these three charts suggest that Benford’s law is scale invariant, and many researchers have found similar results.
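This scale-invariance check is easy to reproduce. The sketch below uses a synthetic lognormal sample as a stand-in for the planetary masses (the dataset and the kilograms-per-Earth-mass conversion factor are illustrative assumptions, not the chapter’s actual data): multiplying every value by a constant barely moves the first-digit frequencies.

```python
import random
from collections import Counter

def first_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])  # scientific notation, e.g. '5.972e+24'

def digit_frequencies(values):
    counts = Counter(first_digit(v) for v in values)
    return {d: counts[d] / len(values) for d in range(1, 10)}

random.seed(42)
# Synthetic "masses" spanning many orders of magnitude.
masses = [random.lognormvariate(0, 5) for _ in range(100_000)]

earth_masses = digit_frequencies(masses)
kilograms = digit_frequencies([m * 5.972e24 for m in masses])  # unit change

print(earth_masses[1], kilograms[1])  # both near Benford's 0.301
```

The key property of the sample is that it spans many orders of magnitude; any unit conversion then just shifts the values along the logarithmic scale without changing the digit pattern.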
So the law appears to hold for the same dataset expressed in different units. But this still doesn’t explain why we would see the same pattern across such a wide range of datasets.
After all, what do baseball statistics have in common with atomic weights, or numbers found in a copy of Reader’s Digest?
Jurisdiction of the Law
Benford’s law fits with many datasets, but not all. For example, telephone numbers have a more uniform distribution of first digits, since area codes are assigned (and thus chosen to fit a certain distribution) rather than naturally occurring. But is it only artificially-created data that does not fit Benford’s law?
Unfortunately, we have no systematic method to predict if a given dataset will fit the law. Not all hope is lost, however; with some new knowledge, we can make a pretty good guess.
Ralph Raimi, a mathematician from the University of Rochester, made a keen observation: even if you start with a variety of datasets that do not fit Benford’s law, the combination of all of them will often lead to a close fit.
Therein lies the key: the sets of numbers that most closely follow Benford’s law are those that come from a wide range of underlying distributions.
In general, if you take random samples from random distributions, then your data will converge to fit the law.
Intuitively, you could say that the numbers contained in different articles in a Reader’s Digest issue, being practically unrelated, would fit this condition almost perfectly. This is also the case with stock prices, with each company’s performance being largely independent of others.
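Raimi’s observation can be simulated directly. In this sketch (an illustration, not real data), each value is drawn from an exponential distribution whose scale is itself chosen at random across six orders of magnitude:

```python
import random
from collections import Counter

def first_digit(x: float) -> int:
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])

random.seed(0)
values = []
for _ in range(200_000):
    # Pick a "random distribution": an exponential whose scale is itself
    # drawn at random, spanning six orders of magnitude.
    scale = 10 ** random.uniform(0, 6)
    values.append(random.expovariate(1 / scale))

counts = Counter(first_digit(v) for v in values)
freqs = [counts[d] / len(values) for d in range(1, 10)]
print([round(f, 3) for f in freqs])  # decreasing, starting near 0.30
```

No single exponential distribution follows Benford’s law exactly, but the mixture comes very close: the same effect, in miniature, as combining unrelated numbers from different Reader’s Digest articles.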
This is a promising theory as to the nature of data that fits Benford’s law. With this in mind, we can (finally!) return to the case of Wayne James Nelson, accused of fraud by the Arizona state government.
Let’s take another look at the amounts of the fake checks.
Given that the checks were all made to the same vendor, we cannot, without a bit of thought, assume that Benford’s law applies based on the theory just discussed. For example, if we imagine a vendor that sells only a single product or service, we might find that its prices tend to hover around the same point. In other words, its prices may come from the same underlying distribution, which may be skewed in a way that does not conform to Benford’s law.
But we might be able to refine our guess by taking a closer look at the details of this particular case. What can we deduce?
Well, a governmental organization such as the Arizona state government tends to purchase from larger, more mature vendors. Larger vendors typically have a more diverse portfolio of products and services, leading to more diverse pricing schemes and price distributions. In such a case, Benford’s law would be more likely to apply.
Even then, it’s still not guaranteed.
Indeed, applying Benford’s law broadly without thinking can often lead to false positives. But that’s not necessarily a bad thing; after all, a tool need only fit the job at hand. False positives are acceptable in a first pass at detecting suspicious activity–which is precisely how Arizona state accountants applied the theory to uncover Nelson’s curious checks.
Looking at the numbers, it appears that Nelson’s payments were specifically chosen to be under $100,000, a level at which human approval would have been required. If not for a broad application of Benford’s law to flag suspicious data, his checks may have passed under the radar.
And his checks are, indeed, highly suspicious. Even from a quick glance at the values, only one of the twenty-three values begins with 1, yet twenty-one begin with the digits 7, 8, or 9. Comparing the amounts in a proper test against Benford’s law reveals a significant mismatch.
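A “proper test” here typically means a chi-square goodness-of-fit test against the Benford frequencies. The digit counts below are hypothetical, chosen only to match the pattern just described (one of the 23 amounts starting with 1, twenty-one starting with 7, 8, or 9); the real amounts appear in Nigrini’s article.

```python
import math

BENFORD = [math.log10(1 + 1 / d) for d in range(1, 10)]

# Hypothetical first-digit counts for the 23 checks, matching the pattern
# described: one amount starts with 1, twenty-one start with 7, 8, or 9.
observed = [1, 0, 0, 0, 0, 1, 7, 8, 6]
n = sum(observed)

chi2 = sum((o - n * p) ** 2 / (n * p) for o, p in zip(observed, BENFORD))
print(f"chi-square statistic: {chi2:.1f}")
# The 5% critical value with 8 degrees of freedom is about 15.5; a statistic
# this far above it flags the amounts as highly suspicious.
```

With only 23 checks the chi-square approximation is rough (several expected counts fall below 5), which is one more reason to treat such a flag as a prompt for human review rather than proof of fraud.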
Again, this doesn’t necessarily indicate fraud on its own; the application of Benford’s law only served to cast a spotlight on Nelson’s activity. But once the checks were noticed, it only took a quick human investigation to realize the true nature of the payments.
You may have guessed the outcome by now. The checks were not just going to any bogus vendor–Nelson was the bogus vendor, and he was depositing the checks in his own bank account.
In the end, Benford’s law found the numbers suspicious, humans found the numbers fraudulent, and Nelson found himself in prison.
Thanks to a couple of mathematicians and some dirty books, tax people now have fancy ways of detecting fraud, or figuring out if you really just “accidentally” missed a few thousand dollars on your tax return.
But thanks to this story, you, too, have a new tool in your belt to help you outsmart those sneaky money grabbers–when you make up numbers, make sure a whole bunch of them start with 1.
If you’re not going to follow the law,
at least follow Benford’s law.
Benford, Frank. “The Law of Anomalous Numbers.” Proceedings of the American Philosophical Society 78, no. 4 (March 31, 1938): 551–572. doi:10.2307/984802.
Cleary, Richard, and Jay C. Thibodeau. “Applying Digital Analysis Using Benford’s Law to Detect Fraud: The Dangers of Type I Errors.” Auditing: A Journal of Practice & Theory 24, no. 1 (2005): 77–81.
“Exoplanet Orbit Database | Exoplanet Data Explorer.” Accessed April 14, 2013.
Glaisher, J. W. L. “On Errors in Vlacq’s (often Called Brigg’s or Neper’s) Tables of Ten Figure Logarithms of Numbers.” Monthly Notices of the Royal Astronomical Society 32 (May 1, 1872): 255–262.
Hill, T. P. “The First Digit Phenomenon: A Century-old Observation About an Unexpected Pattern in Many Numerical Tables Applies to the Stock Market, Census Statistics and Accounting Data.” American Scientist 86, no. 4 (July 1, 1998): 358–363. doi:10.2307/27857060.
Nigrini, Mark. “I’ve Got Your Number.” Accessed April 14, 2013. http://www.journalofaccountancy.com/issues/1999/may/nigrini.
Raimi, Ralph A. “The Peculiar Distribution of First Digits.” Scientific American 221, no. 6 (December 1969): 109–120. doi:10.1038/scientificamerican1269-109.
“Tomash Collection Images.” Accessed April 14, 2013.