Number values and multiple representations

One of the questions my mind keeps returning to is: What is a number?

I don’t mean this in a high-level set theory way. I’m not talking about aleph-numbers or other sorts of concepts. I’m restricting my thoughts here to the sort of numbers that are the domain of high school mathematics, nothing trickier than complex numbers. For this post, I’m going to focus only on the real numbers.

Consider 0.999… = 1. This seems to be one of the more persistently confusing things about mathematics. Brett Berry offers an explanation, which I’ll use to exemplify some concepts. The inimitable Vi Hart offers a fairly exhaustive explanation of why it’s true, but I’m more interested in why so many people insist it’s false and have to be convinced.

Berry gets at the heart here: “You know that two different numbers, should in fact be different numbers.” This implies that 0.999… and 1 are different numbers, but are they? Hart, for her part, refers to them having the same value, in the same way that 1/2 and 0.5 have the same value. However, this doesn’t address the issue of what a number is (and this may well be Hart’s intention).

The mathematical approach would be to get rigorous about our terms and avoid “number” altogether. What 0.999… = 1 means is “the numeric value represented by the symbol 0.999… is the same as the numeric value represented by the symbol 1.” This is fair enough. We can talk of symbols (1, 0.999…, 4/4, $$i^4$$) and we can talk of numeric values (these all have the same value, assuming base ten). Which is the number? It doesn’t matter, if we excise the word entirely and stick with “value” and “symbol.”

I would argue, though, that this is not needed, and even distracting. I think “number” refers to the numeric value. This is a thorny issue in its own right: Does it include sign? Does it include unit?

I’ve discussed sign in a previous item: -3, +3, and ±3 are three different numbers, although the last two are usually written the same way (as 3). This ambiguity is generally irrelevant, but there are times when it’s problematic. Some apparent paradoxes, for instance, are based on exploiting $$\sqrt{x^2} = |x|$$ in mathematical sleight of hand.
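A standard fake proof of this kind (my example, not Berry’s or Hart’s) runs:

$$-1 = (-1)^1 = (-1)^{2 \cdot \frac{1}{2}} = \left((-1)^2\right)^{\frac{1}{2}} = \sqrt{1} = 1$$

The sleight of hand is in the middle step: $$\sqrt{(-1)^2} = |-1| = 1$$, not $$-1$$, because the square root symbol denotes only the non-negative root.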

As far as unit goes, it seems obvious that 500 cm = 5 m. The values of 500 and 5 are clearly different, but the lengths represented by 500 cm and 5 m are the same (ignoring the precision that a physicist would point to). But this raises another question: Does the equality sign separate two symbols for the same numeric value, or does it separate two symbols for the same measurement?
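The distinction can be made concrete in code. Here’s a minimal sketch (the `Quantity` class and `TO_METERS` table are hypothetical names of my own, not from any library) in which equality compares measurements rather than bare numeric values:

```python
from fractions import Fraction

# Hypothetical conversion table: factors to a base unit (meters), kept exact.
TO_METERS = {"m": Fraction(1), "cm": Fraction(1, 100)}

class Quantity:
    """A measurement: a numeric value paired with a unit."""

    def __init__(self, value, unit):
        self.value = Fraction(value)
        self.unit = unit

    def in_meters(self):
        return self.value * TO_METERS[self.unit]

    def __eq__(self, other):
        # Equality compares the measurement, not the bare numeric value.
        return self.in_meters() == other.in_meters()

print(Quantity(500, "cm") == Quantity(5, "m"))  # the measurements are equal
print(Fraction(500) == Fraction(5))             # the bare values are not
```

Under this design, the equal sign in 500 cm = 5 m separates two symbols for the same measurement, even though 500 and 5 are different values.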

This isn’t as trivial an issue as it may seem. We can say that 500 cm = 5 m and that 500 / 500 = 5 / 5. One way of looking at fractions (encouraged by the Common Core) is to see the denominator as a unit. If it’s a unit, and the numeric value doesn’t include the unit, then why does it look like a number? How can we say that division is the inverse of multiplication if it involves units, not numbers?

When we sweep these questions under the rug, when we fail to at least examine them (even if we don’t come up with answers), we lose opportunities to see why students are confused.

A common thing to do when teaching students how to multiply a fraction and an integer is to turn the integer into a fraction: $$4 = \frac{4}{1}$$. This allows students to then apply a consistent algorithm: Multiply across the top, multiply across the bottom. This algorithm can’t be applied if there’s no bottom, after all.
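The algorithm students are taught can be written out directly (a sketch using Python’s `fractions` module; `multiply_across` is my name for it):

```python
from fractions import Fraction

def multiply_across(a: Fraction, b: Fraction) -> Fraction:
    # Multiply across the top, multiply across the bottom.
    return Fraction(a.numerator * b.numerator, a.denominator * b.denominator)

four = Fraction(4, 1)        # the integer 4, rewritten as 4/1
two_thirds = Fraction(2, 3)

print(multiply_across(four, two_thirds))  # 8/3
```

Note that `Fraction(4, 1)` is the code-level analogue of the classroom step: the integer is given a denominator of 1 so the one algorithm applies uniformly.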

I see high schoolers clinging to this idea. At some point, students ought to be able to skip this step, but my students consistently resist doing so. This and other confusions suggest to me that students see different realms of numbers: integers, fractions, decimals, and percentages. Moving between these realms involves some sort of algorithm.

Somewhere, then, we teachers are missing a key opportunity. The introduction of fractions, decimals, and percentages ought to be the opportunity to teach about multiple representations. The idea of equivalent fractions, ditto. Students seem to be learning that 1/2 is a “better” representation of 2/4, or even that 2/4 is a disguise for 1/2. And since students overwhelmingly prefer decimals, that means that 0.5 is the “ideal” form, and that 1/2 and 2/4 are inferior to it.

I will invoke my Inner Plato and argue that there is some value, which we would represent graphically by a bar half the length of our unit on a number line, and that we would represent symbolically as 0.5, 50%, 1/2, 2/4, sin(30 deg), or any other of an infinite number of ways. When we say that 2/4 = 0.5, we’re not saying that the “number” is 0.5. 0.5 and 2/4 are both symbols for the same ideal value. There are cases where it’s most useful to write this number as 2/4 (such as when we’re adding it to 1/4), and cases where it’s most useful to write it as 0.5, or as sin(30 deg).
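A quick check in Python makes the same point: many symbols, one value. (The `isclose` comparison hedges the sine, since floating point only approximates it.)

```python
import math
from fractions import Fraction

# Several symbols, one value: one half.
half = Fraction(1, 2)

print(Fraction(2, 4) == half)     # 2/4 and 1/2 name the same value
print(half == 0.5)                # 0.5 is exactly representable in binary
print(Fraction(50, 100) == half)  # 50%, written as a fraction

# sin(30 deg) names the same ideal value, though floating point
# only approximates it, so we test for closeness rather than equality.
print(math.isclose(math.sin(math.radians(30)), 0.5))
```

All four comparisons come out true: the representations differ, the value does not.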

These explanations are not wanting in mathematics education. So either we’re doing them incorrectly or at the wrong time, or it’s not something that can be taught. If it’s not something that can be taught, then there’s no point in dwelling on them for any extended period of time: Students will either pick up on it, or they won’t. If it’s something that can be taught, though, what are we doing wrong?

Let’s return now to 0.999… = 1. Why is this so difficult? Most explanations rely on some notion of the infinity of digits yielding an infinitesimal difference. This results in people insisting that 0.999… < 1, even if that difference is infinitesimally small. Is the concept of limits getting in the way of understanding this?
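For what it’s worth, the limit argument fits in one line: the difference between 1 and the n-digit truncation shrinks to zero, so

$$0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^k} = \lim_{n \to \infty}\left(1 - \frac{1}{10^n}\right) = 1$$

There is no infinitesimal left over; the limit is exactly 1.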

People overall seem to have less trouble with 0.333… = 1/3. There’s a straightforward way to explain it: 1.0 / 3 = 0.3 with 0.1 left over, and so on at every decimal place. Applying the same reasoning to 0.999… = 1 relies on misapplying division: 1.0 / 1 = 0.9 with 0.1 left over. But we’re supposed to take as many as we can: we can’t leave a remainder that’s equal to or greater than the divisor. That’s one issue.
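The point about remainders can be seen by running the long-division algorithm itself (a sketch; `decimal_digits` is my own hypothetical helper):

```python
def decimal_digits(numerator, denominator, n=8):
    # Long division past the decimal point: at each step take as many
    # copies of the divisor as possible, so the remainder is always
    # strictly smaller than the divisor.
    whole, remainder = divmod(numerator, denominator)
    digits = []
    for _ in range(n):
        remainder *= 10
        digit, remainder = divmod(remainder, denominator)
        digits.append(digit)
    return whole, digits

print(decimal_digits(1, 3))  # (0, [3, 3, 3, 3, 3, 3, 3, 3]) -> 0.333...
print(decimal_digits(1, 1))  # (1, [0, 0, 0, 0, 0, 0, 0, 0]) -> 1.000...
```

Dividing 1 by 3 produces threes forever, but dividing 1 by 1 can never produce 0.999…, because the algorithm always takes as many as it can: the remainder hits zero immediately.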

However, I think another issue is that we tend to see something sacred about the integer representation of an integer number. Students struggle with 4/4 as well. In this mindset, integers don’t have decimal or fractional representations: They’re integers.

The notation is not the mathematics, though. The symbol 1 is no more “the value one” than 0.999… or 5/5 or 1/1 or (1/2 + 1/2) is. They’re all representations for the same value, and they all have their uses in their own contexts.

Perhaps part of the issue is the way in which we treat equations in the early grades. We teach that what comes after the equal sign is the end goal. 4 + 3 = 7 is presented in very early grades as “four things combined with three things is the same as seven things,” which is phylogenetically true, but which represents a simplistic understanding of numeracy. At some point, mathematically adept people should move beyond that understanding, at least unconsciously, in favor of a more mature understanding that 4 + 3 and 7 represent the same value, which can then be dissected or manipulated as needed.

Clio Corvid
