The “Empty Product” is the product of all numbers in a group of no numbers at all.
To the uninitiated, this sounds like word salad. Mostly because it is: How can you multiply no values together? How does that even make any sense?
And on one level, it really doesn’t: We need the Empty Product for certain cases, such as 0!, but it’s usually defined in terms of “this is the value because that’s what it has to be.”
Largely because of the development of the calculator, the logarithmic scale has lost a lot of the love it used to have. The longer I teach math, the more I wonder how this loss has gotten in the way of understanding.
Today, I’m going to show why the Empty Product has the value it does by using the logarithmic scale. (If you hate logarithms, don’t worry. There are no actual logarithms being used in this article, just the scale.)
First, though, I’ll provide the more formal explanation. If you want that for background, read on. If you want to ignore that, jump to “Okay, on with the show!”
Here’s the somewhat formal explanation: Given two sets, A and B, we can calculate the product of all the elements of A and all the elements of B. If we have {3, 4} and {2, 5}, for instance, then 3 * 4 * 2 * 5 = 120. Likewise, if we have {3} and {4, 2, 5}, then the product is still 120, because we’re dealing with the same elements.
Since we’re dealing with the same elements, the product will be the same. That’s because multiplication is associative and commutative: We can rearrange the order of the elements all we please, the product will always be the same.
Hence if we have the null set and {3, 4, 2, 5}, it must be the case that the product is still 120. Since the product of the elements of {3, 4, 2, 5} is 120, the product of the elements of the null set, i.e., the Empty Product, must be a value that, when multiplied by 120, yields 120.
This is a formal way of saying that the Empty Product is the number which, when multiplied by another number, yields that number. In algebraic notation, it’s the value of b which satisfies a = a * b for all a. Which is to say, 1.
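If you'd like to see this argument in code, Python's math.prod happens to encode the very same convention: its starting value is the multiplicative identity 1, so the product of an empty list comes out as 1. A quick sketch:

```python
from math import prod

# Splitting the same elements across two sets never changes the product:
a, b = [3, 4], [2, 5]
print(prod(a) * prod(b))   # 12 * 10 = 120

a, b = [3], [4, 2, 5]
print(prod(a) * prod(b))   # 3 * 40 = 120

# So the product over the empty set must be the multiplicative identity.
# math.prod agrees: its default starting value is 1.
print(prod([]))            # 1
```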
Another common explanation involves factorials. n! is the product of all integers from 1 to n. So, for instance, 4! = 4 * 3 * 2 * 1 = 24. Factorials are common in probability, and it is often the case that a formula will generate the need for 0!.
Consider: You have five students, and want to choose three for a task. How many ways can you do so? The modern factorial was originally designed to calculate the number of ways to arrange objects; we can arrange five objects in 5! = 120 ways. If we choose three for a task, then we fail to choose the other two, and so the number of different groups of three we can take from a group of five is 5!/(3!2!) = 10.
For the sake of completeness, we want the formula to support any number in our subgroup from 1 to 5. If you have five students and want to choose one? That’s 5!/(1!4!) = 5. Want to choose two? 5!/(2!3!) = 10. Want to choose four? 5!/(4!1!) = 5.
What happens when we want to choose all five? Logically, that’s one. There’s only one way we can choose five students from five students. Our formula gives 5!/(5!0!), which we said has to be 1. Since 5!/5! = 1, it must be the case that 1/0! = 1, so 0! = 1.
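We can check all of these cases at once with a short Python sketch (choose here is just an illustrative helper built from math.factorial, which itself follows the 0! = 1 convention):

```python
from math import factorial

# Number of ways to choose k of n students: n! / (k! (n-k)!)
def choose(n, k):
    return factorial(n) // (factorial(k) * factorial(n - k))

for k in range(1, 6):
    print(k, choose(5, k))   # 5, 10, 10, 5, 1

# choose(5, 5) only comes out to 1 because factorial(0) == 1:
print(factorial(0))          # 1
```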
But 0! is the product of all integers from 1 to 0… and there aren’t any.
One challenge with simply stating that 0! is the product of all integers from 1 to 0, without providing this “but the formula says it has to be!” explanation, is that the same reasoning would seem to give (-1)! = 1, which we need to prevent.
A related explanation is that n!/n = (n-1)!. That is, 5!/5 = 4!, 4!/4 = 3!, and so on. So 1!/1 = 0!, which is 1, but 0!/0 is undefined, so (-1)! is also undefined.
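Here's a small Python sketch of that downward walk, starting from the known value 5! = 120 and repeatedly applying (n-1)! = n!/n:

```python
# Walk the factorials downward using (n-1)! = n!/n:
value, n = 120, 5          # start from 5! = 120
while n > 0:
    value //= n            # (n-1)! = n!/n
    n -= 1
    print(f"{n}! = {value}")
# The loop ends at 0! = 1. The next step would divide by zero,
# which is why (-1)! is left undefined.
```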
Okay, on with the show!
So let’s consider this question from the point of view of the logarithmic scale.
We should all be familiar with the number line, which is formally called the linear scale. This is what we learned in elementary school: All the numbers are spaced out evenly, and once negatives are introduced, 0 is placed in the middle.
I say “in the middle” while acknowledging that, because this scale is infinitely long, it doesn’t technically have a “middle”. But, conceptually if not technically, the middle of the number line is 0, and it goes from negative infinity to positive infinity.
The logarithmic scale, meanwhile, goes from 0 to infinity and has a conceptual center at 1.
The linear and logarithmic scales are related in several ways: To add on the linear scale, just move that number of steps. To multiply on the logarithmic scale, just… move that number of steps.
This was the basis of the slide rule, an engineering tool that was used in various forms for centuries before the development of the modern calculator. The slide rule had other functions as well, but the common scales were two logarithmic scales that could be moved relative to each other in order to multiply two values.
Meanwhile, to multiply m and n on the linear scale, start at the middle (0) and take m steps of size n (or n steps of size m). To find the nth power of m on the logarithmic scale, start at the middle (1) and take n steps of size m (because the numbers aren’t evenly spread, though, taking m steps of size n gets you a different place, so m^n is not usually equal to n^m).
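To make the stepping idea concrete, here's a Python sketch (take_steps is a made-up helper for this article, not anything standard). A number x sits at position log(x) on the logarithmic scale, so the middle (1) sits at position 0, and a “step of size m” covers the distance from 1 to m:

```python
from math import log, exp

# On the logarithmic scale, x sits at position log(x),
# so the middle (1) sits at position log(1) == 0.
def take_steps(n_steps, step_size):
    position = log(1)                 # start at the middle
    position += n_steps * step_size   # each step adds a fixed distance
    return exp(position)              # convert the position back to a number

m, n = 2, 5
print(take_steps(n, log(m)))  # n steps of size m: m**n, about 32
print(take_steps(m, log(n)))  # m steps of size n: n**m, about 25
```

Note that swapping the roles of m and n lands in a different place, which is exactly why m^n is not usually equal to n^m.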
So where are you at before you start adding anything at all? You start at 0. This is where students enter math class in kindergarten. Indeed, “adding on” is considered an indicator that math knowledge is maturing, as opposed to “adding up”. Ask a child who is first starting to add what three plus five is, and they’ll start with no fingers, then count out three, then count out another five to eight (“adding up”). Ask a slightly older child, and they will typically start at three and count up five times to eight (“adding on”).
In other words, we seem to be fairly comfortable with the notion that, before we add anything, we have nothing at all. This is a large part of why, I believe, we want to say the same thing about multiplication: Before we’ve multiplied anything, we have nothing, and nothing is zero.
But if we look at it in terms of the scales, we reach a different conclusion. Adding means we start in the middle of the linear scale… which is 0. Multiplying means we start in the middle of the logarithmic scale… which is 1.
One is where we start before we’ve multiplied anything at all.
The Empty Product is the result of not multiplying anything at all.
So the Empty Product is 1. The scales said so. Our method of multiplying by counting out steps on the logarithmic scale said so.