
On infinite decimal expansions, missing numbers, and generating functions

(This post is a cleaned-up and expanded version of this thread.)


A cool fact I've seen shared around the internet a few times: The decimal expansion of \( 1/998001 \) starts with

\[ \frac{1}{998001} = 0.000001002003\dots 996997999\dots \]

That is, it begins with three-digit strings from \( 000 \) to \( 999 \), in order, except that it skips \(998\) for some reason.
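If you'd like to check this yourself, here's a quick sanity check: a minimal Python sketch (no libraries needed) that computes the digits by long division and groups them into three-digit blocks.

```python
# Compute the first 3000 digits of 1/998001 by long division,
# then split them into three-digit blocks.
digits, remainder = [], 1
for _ in range(3000):
    remainder *= 10
    digits.append(str(remainder // 998001))
    remainder %= 998001

blocks = ["".join(digits[i:i + 3]) for i in range(0, 3000, 3)]
print(blocks[:5])        # ['000', '001', '002', '003', '004']
print(blocks[995:1000])  # ['995', '996', '997', '999', '000'] -- no 998!
```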

The first thing to observe is that \( 998001 = 999^2 \).

Recall the formula for the infinite geometric series:

\[ \sum_{n=0}^{\infty} r^n = \frac{1}{1 - r}. \]

If we differentiate both sides with respect to \( r\), we get

\[ \sum_{n=1}^{\infty} nr^{n-1} = \frac{1}{(1 - r)^2}, \]

and multiplying by \( r \) gives

\[ \sum_{n=1}^{\infty} nr^n = \frac{r}{(1 - r)^2}. \]

(This can also be obtained by some series manipulations.)

Now, take \( r = 0.001 \). We have

\[ \frac{0.001}{(1 - 0.001)^2} = \frac{1000}{998001} = 0.001 + 0.000002 + 0.000000003 + \dots \]

From here, the appearance of the numbers from \( 001 \) to \( 997 \), at least, is clear. What changes at \( 998 \)? To see, we look a few places ahead and zoom in on that part of the sum, writing it vertically for easier visualization.

\[ \begin{align*}&\quad \, 0.001 \\
&+ \\
&\, \vdots \\
&+0.000\dots 997 \\
&+ 0.000\dots000998 \\
&+0.000\dots000000999 \\
&+0.000\dots00000000{\color{red}1}000 \\
&+ \dots \\
&\,\vdots \\
&\overline{\quad \, 0.001\dots 997999000\dots\quad\quad}
\end{align*} \]

As can be seen above, the \( 1000 \) term spills over into the \( 999 \) term, so when they are added, the \( 999 \) becomes \( 1000 \), and a \( 1 \) is in turn carried over to the \( 998 \), yielding \( 999 \). Nothing earlier in the expansion is affected. This explains the missing \( 998 \).

Of course, the above is the expansion of \( 1000/998001 \), which isn't quite our original fraction, and its decimal expansion is missing the \( 000 \) string at the beginning. To get the fraction in our original claim, \( 1/998001 \), we divide by \( 1000 \), shifting the expansion three decimal places to the right, which indeed accounts for the missing \( 000 \).

More generally, the decimal expansion of \( 1/(10^d - 1)^2 \) encodes all \( d \)-digit strings from \( 00\dots 0 \) to \( 99 \dots 9 \) except for \( 99 \dots 8 \).
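For instance, with \( d = 2 \), this says \( 1/9801 \) runs through \( 00, 01, \dots, 97, 99 \). Here's the same long-division check, written as a small sketch with an illustrative helper blocks(d, num_blocks):

```python
# Check that 1/(10**d - 1)**2 lists every d-digit block except 99...8.
def blocks(d, num_blocks):
    denom = (10**d - 1)**2
    digits, remainder = [], 1
    for _ in range(d * num_blocks):
        remainder *= 10
        digits.append(str(remainder // denom))
        remainder %= denom
    return ["".join(digits[i:i + d]) for i in range(0, d * num_blocks, d)]

print(blocks(2, 100)[95:])      # ['95', '96', '97', '99', '00'] -- 98 skipped
print(blocks(4, 10000)[9995:])  # ['9995', '9996', '9997', '9999', '0000']
```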


Let's look at our old friend \( 1/7 = 0.\overline{142857} \) again. Notice that \( 28 \) is two times \( 14 \), and \( 57 \) is almost two times \( 28 \) (which would be \( 56 \)), overshooting by one. Normally, this could be chalked up to coincidence, and for many years, I thought it was. But it turns out there's a nice explanation! Note that

\[ \frac{1}{7} = \frac{0.14}{1 - 0.02} = 0.14 + 0.0028 + 0.000056 + 0.00000112 + \dots \]

The first two terms of the sum explain why \( 14 \) and \( 28 \) show up. And just as before, the reason we see \( 57 \) rather than \( 56 \) is that the next term, corresponding to \( 112 \), spills over.
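To watch the spillover happen, here's a quick sketch that adds up the first few terms of \( 0.14/(1 - 0.02) \) as exact fractions:

```python
from fractions import Fraction

# Partial sums of 0.14 * (0.02)**n, kept as exact fractions to avoid
# floating-point noise.
total = Fraction(0)
for n in range(5):
    total += Fraction(14, 100) * Fraction(2, 100)**n
    print(float(total))
# 0.14, 0.1428, 0.142856, 0.14285712, 0.1428571424
# The 112 term pushes the ...56 up to ...57, giving 0.142857...
```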


One of my favorite variants of the above, if a bit more niche: Notice that

\[ \begin{align*} \frac{1}{89} &= 0.0112359... \\
\frac{1}{9899} &= 0.000101020305081321345590... \\
\frac{1}{998999} &= 0.000001001002003005008013021034055089144233377610988 \dots \end{align*} \]

That is, these fractions seem to encode the famous Fibonacci sequence \( \{ F_n \}_{n=1}^{\infty} = F_1, F_2, \dots \), the sequence of integers defined by \( F_1 = F_2 = 1 \) and \( F_{n+1} = F_{n} + F_{n-1} \); that is, each term is the sum of the previous two. The first few terms are

\[ 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \dots \]


Before we go too deeply into that, I'll relate a math problem that I was surprised turned up in a local high school contest a couple of years ago: What is the value of the infinite sum

\[ \frac{1}{2} + \frac{1}{4} + \frac{2}{8} + \frac{3}{16} + \frac{5}{32} + \dots ? \]

It's pretty reasonable to suppose that the sum converges in the first place (one can check by induction that \( F_n \le (7/4)^n \), so the terms are dominated by a convergent geometric series). Let \( S \) denote the value of this sum. The recurrence \( F_{n+1} = F_{n} + F_{n-1} \) seems to be a clue: we somehow want to line up the \( F_n \) and \( F_{n-1} \) terms. To do this, we divide everything by \( 2 \), shifting the \( F_n \)s one place to the right:

\[ \begin{align*} \frac{S}{2} &= \frac{1}{4} + \frac{1}{8} + \frac{2}{16} + \frac{3}{32} + \dots  \end{align*} \]

Combining the two gives

\[ \begin{align*} S + \frac{S}{2} 
&= \frac{1}{2} + \left(\frac{1}{4} + \frac{1}{4}\right) + \left(\frac{2}{8} + \frac{1}{8}\right) +  \left(\frac{3}{16} + \frac{2}{16}\right) + \dots \\
&= \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{5}{16} + \dots \end{align*} \]

This time, it looks almost like the original sum, except that the \( F_n \)s have been shifted one place to the left; another way of viewing this is that the denominators have been halved, i.e., the original sum has been doubled. Well, almost. For comparison, we have

\[ 2S = 1 + \frac{1}{2} + \frac{2}{4} + \frac{3}{8} + \frac{5}{16} + \dots \]

Notice that these expressions differ by a term of \(1\). That is,

\[ S + \frac{S}{2} = 2S - 1. \]

We can now solve for \( S \); we have \( S = 2 \), and so

\[ \sum_{n=1}^{\infty} \frac{F_n}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{2}{8} + \frac{3}{16} + \frac{5}{32} + \dots = 2. \]
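As a quick numerical sanity check, here's a small sketch that accumulates the first hundred terms:

```python
# Partial sums of F_n / 2**n; they approach 2.
f_prev, f_cur = 0, 1   # F_0 = 0, F_1 = 1
total = 0.0
for n in range(1, 101):
    total += f_cur / 2**n
    f_prev, f_cur = f_cur, f_prev + f_cur
print(total)   # 1.99999999... -- the partial sums approach 2
```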

More generally, we can consider the sum

\[ S = \sum_{n=1}^{\infty} F_nx^n. \]

Repeating the process above, but replacing \( 1/2 \) with \( x\), we have that, assuming the convergence of all relevant series, \( xS + S = S/x - 1 \), which rearranges to \( S = x/(1 - x - x^2) \). That is,

\[ \frac{x}{1 - x - x^2} = \sum_{n=1}^{\infty} F_nx^n. \]
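If you'd like a numerical check of this identity too, here's a small sketch comparing the closed form against a truncated sum at a sample point (any \( x \) small enough for convergence works; \( x = 0.3 \) is just an arbitrary choice):

```python
# Compare x/(1 - x - x**2) with a truncated sum of F_n * x**n at x = 0.3.
x = 0.3
closed_form = x / (1 - x - x**2)

f_prev, f_cur, partial = 0, 1, 0.0
for n in range(1, 60):
    partial += f_cur * x**n
    f_prev, f_cur = f_cur, f_prev + f_cur

print(closed_form, partial)   # both about 0.4918032786...
```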

More generally, suppose we have a sequence of numbers \( \{ a_n \}_{n=1}^{\infty} = a_1, a_2, a_3 \dots \). The function

\[ f(x) = \sum_{n=1}^{\infty} a_nx^n = a_1x + a_2x^2 + \dots \]

is said to be the generating function of the sequence \( \{ a_n \}_{n=1}^{\infty} \).

The above tells us that the function \( x/(1 - x - x^2) \) is the generating function of the Fibonacci sequence. Our previous discussion tells us that \( x/(1 - x)^2 \) is the generating function of the positive integers. Note that nothing stops us from starting at \( n = 0 \) instead of \( n=  1\), or whichever index, so in this light we can view \( 1/(1 - x) \) as the generating function of the constant sequence \( \{ a_n \}_{n=0}^{\infty} \) given by \( a_ n = 1 \) for all nonnegative integers \( n \).


We can now go back to the matter of the decimal expansions. What the above says is that the fractions

\[ \frac{1}{10^{2d} - 10^d - 1} \]

encode every Fibonacci number with at most \( d \) decimal digits, except for the largest one. This can be seen by taking \( x = 10^{-d} \) and plugging it into our generating function; we get that

\[ \frac{10^d}{10^{2d} - 10^d - 1} = \sum_{n=1}^{\infty} F_n 10^{-nd} = 0.00\dots 100\dots 100\dots 2 \dots \]

By an observation similar to the one above, the smallest Fibonacci number with \( d + 1 \) digits spills over into the largest Fibonacci number with at most \( d \) digits, explaining what is going on. And as before, the missing \( 00\dots 0 \) string is accounted for by multiplying by a factor of \( 10^{-d} \).
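Here is the \( d = 3 \) case checked directly, in the same long-division style as before:

```python
# Three-digit blocks of 1/998999 versus the Fibonacci numbers themselves.
denom = 998999
digits, remainder = [], 1
for _ in range(48):
    remainder *= 10
    digits.append(str(remainder // denom))
    remainder %= denom

print(["".join(digits[i:i + 3]) for i in range(0, 48, 3)])
# ['000', '001', '001', '002', '003', '005', '008', '013', '021',
#  '034', '055', '089', '144', '233', '377', '610']
# The next block is '988': that's 987 plus the carry from 1597 spilling over.
```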

There is a minor detail. What if there is more than one overflow? Say, the term corresponding to the first \( (d + 1) \)-digit Fibonacci number overflows into the term corresponding to the largest \( d \)-digit Fibonacci number, which then becomes a \( (d + 1) \)-digit number and overflows into the term before it. This would happen exactly when the largest \( d \)-digit Fibonacci number is \( 10^d - 1 \). Fortunately, a paper of Bugeaud, Luca, Mignotte, and Siksek shows that this is never the case.


There are more ways in which studying the above generating function is useful. It is an established result (partial fraction decomposition) that any rational function over the reals can be written as a polynomial plus a sum of simpler rational functions whose denominators are powers of irreducible polynomials of degree at most \( 2 \). (And if we want to consider rational functions over \( \mathbb{C} \) instead, we can stipulate that the denominators are all powers of linear factors.) We have that \( 1 - x - x^2 = -(x^2 + x - 1) \) has roots \( \alpha = (-1 + \sqrt{5})/2 \) and \( \beta = (-1 - \sqrt{5})/2 \). So, we can write

\[ \frac{x}{1 - x - x^2} = \frac{A}{x - \alpha} + \frac{B}{x - \beta} \]

for some constants \( A, B \). Solving gives \( A = -\alpha/(\alpha - \beta), B = \beta/(\alpha - \beta) \). Note that \( \alpha - \beta = \sqrt{5} \). We have

\[ \begin{align*} \frac{x}{1 - x - x^2} &= \frac{1}{\sqrt{5}}\left( -\frac{\alpha}{x - \alpha} + \frac{\beta}{x - \beta} \right) \\
&= \frac{1}{\sqrt{5}}\left( \frac{1}{1 - x/\alpha} - \frac{1}{1 - x/\beta} \right) \end{align*} \]

We now expand the right-hand side into geometric series (the \( n = 0 \) terms cancel) to get

\[ \frac{x}{1 - x - x^2} = \sum_{n=1}^{\infty} \frac{1}{\sqrt{5}}\left( \frac{1}{\alpha^n} - \frac{1}{\beta^n} \right)x^n. \]

However, we already agreed that \( x/(1 - x - x^2) = \sum_{n=1}^{\infty} F_nx^n \); this forces us to conclude that

\[ F_n = \frac{1}{\sqrt{5}}\left( \frac{1}{\alpha^n} - \frac{1}{\beta^n} \right). \]

This gives us an explicit formula for \( F_n \) that does not depend on knowing any previous values!

We can polish this presentation a tiny bit more. Notice that \( 1/\alpha = (1 + \sqrt{5})/2 = \varphi \approx 1.618 \), the famous golden ratio; we also have that \( 1/\beta = (1 - \sqrt{5})/2 = \psi \), the other root of \( x^2 - x - 1 = 0 \). With this we write

\[ F_n = \frac{\varphi^n - \psi^n}{\sqrt{5}}. \]

This identity is known as Binet's formula, named after Jacques Philippe Marie Binet, though Euler, de Moivre, and Daniel Bernoulli reportedly knew of it earlier.
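It's a satisfying formula to test against the recurrence (a quick sketch; since \( |\psi| < 1 \), the \( \psi^n \) term shrinks rapidly, so \( F_n \) is just \( \varphi^n/\sqrt{5} \) rounded to the nearest integer):

```python
import math

# Binet's formula versus the recurrence.
phi = (1 + math.sqrt(5)) / 2
psi = (1 - math.sqrt(5)) / 2

f_prev, f_cur = 0, 1
for n in range(1, 16):
    binet = (phi**n - psi**n) / math.sqrt(5)
    print(n, f_cur, round(binet))   # the last two columns agree
    f_prev, f_cur = f_cur, f_prev + f_cur
```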


This technique of encoding sequences in generating functions in order to study them is a powerful one. A taste of some more advanced stuff, to end: Denote by \( \mathcal{H} \) the complex upper-half plane, that is, the set of complex numbers with positive imaginary part. Take \( \Gamma \) to be the set of \( 2 \times 2 \) integer matrices \( \begin{pmatrix} a & b \\ c & d \end{pmatrix} \) with \( ad - bc = 1 \), that is, of determinant \( 1 \).

Now, for \( \gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \) and \( \tau \in \mathcal{H} \), let \( \gamma\tau = (a\tau + b)/(c\tau + d) \). This action is consistent with matrix multiplication: we have for any \( \tau \in \mathcal{H} \) and \( \gamma_1, \gamma_2 \in \Gamma \), \(\gamma_1(\gamma_2\tau) = (\gamma_1\gamma_2)\tau \). It can also be shown that if \( \tau \in \mathcal{H} \) and \( \gamma \in \Gamma \), then \( \gamma\tau \in \mathcal{H} \) as well.
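This compatibility is easy to check numerically; here's a tiny sketch with two arbitrarily chosen matrices of determinant \( 1 \) (the helpers act and mul are just for illustration):

```python
# Check that the action on the upper half-plane is compatible with
# matrix multiplication, at one sample point.
def act(gamma, tau):
    (a, b), (c, d) = gamma
    return (a * tau + b) / (c * tau + d)

def mul(g1, g2):
    (a, b), (c, d) = g1
    (e, f), (g, h) = g2
    return ((a * e + b * g, a * f + b * h), (c * e + d * g, c * f + d * h))

tau = 0.3 + 1.7j              # a point in the upper half-plane
g1 = ((2, 1), (1, 1))         # determinant 1
g2 = ((1, 3), (0, 1))         # determinant 1
print(act(g1, act(g2, tau)))  # these two outputs agree
print(act(mul(g1, g2), tau))
```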

Let \( k \) be a positive integer, and let \( f \) be a holomorphic (complex differentiable) function defined on \( \mathcal{H} \). We say that \( f\) is a modular form of weight \( k \) if it satisfies the following conditions:

  1. For all \( \tau \in \mathcal{H} \) and \( \gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \Gamma \), \( f(\gamma\tau) = (c\tau + d)^kf(\tau) \);
  2. as \( \operatorname{Im}(\tau) \to \infty \), \( |f(\tau)| \) is bounded.

Note that \( T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} \in \Gamma \); the first condition implies that \( f(T\tau) = f(\tau + 1) = f(\tau) \), so \( f \) is periodic with period \( 1 \). Since \( f \) is holomorphic with period \( 1 \), it has a Fourier expansion in \( q := e^{2\pi i \tau} \); the boundedness condition 2 then rules out negative powers of \( q \), so the expansion takes the form \( \sum_{n=0}^{\infty} c_nq^n \).

(In fact, to check 1., it suffices to check it for \( T \) and \( S := \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} \).)


For a prototypical example of a modular form: Let \( k > 1 \) be a positive integer. Consider the (normalized) Eisenstein series, a function defined on \( \mathcal{H} \) given by:

\[ E_{2k}(\tau) = \frac{1}{2\zeta(2k)}\sum_{(m, n) \in \mathbb{Z}^2 \setminus \{(0,0)\}} \frac{1}{(m + n\tau)^{2k}} \]

where \( \zeta(s) = \sum_{n=1}^{\infty} n^{-s} \) is the Riemann zeta function.

The Eisenstein series \( E_{2k} \) is a modular form of weight \( 2k \). It has the Fourier series expansion

\[ E_{2k}(\tau) = 1 + C_{2k}\sum_{n=1}^{\infty} \sigma_{2k-1}(n)q^n \]

for some constant \( C_{2k} \) depending on \( k \). Here, \( \sigma_k(n) = \sum_{d \mid n} d^k \) denotes the sum of the \(k \)th powers of the (positive) divisors of \(n \). This tells us that we can view \( E_{2k} \) as a generating function for \( \{\sigma_{2k-1}(n)\}_{n=1}^{\infty} \) (up to a scalar multiple). Some values of \( C_{2k} \) are \( C_4 = 240 \), \( C_6 = -504 \), \( C_8 = 480 \).

By using methods from complex analysis, it can be shown that any modular form must be a linear combination of products of the (normalized) Eisenstein series \( E_4 \) and \( E_6 \). In particular, in weight \( 8 \) (that is, \( k = 4 \)), the only possibility is \( E_8 = c(E_4)^2 \) for some constant \( c \). Comparing constant terms, we conclude that \( c = 1 \). That is, \( E_8 = (E_4)^2 \), and so

\[ 1 + 480 \sum_{n=1}^{\infty} \sigma_7(n)q^n = \left(1 + 240\sum_{n=1}^{\infty} \sigma_3(n)q^n \right)^2. \]

By matching the coefficients of \( q^n \), we find

\[ \sigma_7(n) = \sigma_3(n) + 120\sum_{m=1}^{n-1}\sigma_3(m)\sigma_3(n-m). \]

Check this for \( n = 2 \) and \(n = 3 \) if you're skeptical! The first time I learned this material, I honestly didn't believe it myself.
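Or let a computer be skeptical for you. Here's a short sketch checking the identity for the first hundred values of \( n \) (the helper sigma is a naive divisor-power sum, purely for illustration):

```python
# Verify sigma_7(n) = sigma_3(n) + 120 * sum_{m=1}^{n-1} sigma_3(m) * sigma_3(n - m).
def sigma(k, n):
    return sum(d**k for d in range(1, n + 1) if n % d == 0)

for n in range(1, 101):
    lhs = sigma(7, n)
    rhs = sigma(3, n) + 120 * sum(sigma(3, m) * sigma(3, n - m) for m in range(1, n))
    assert lhs == rhs
print("checked n = 1 to 100")
```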

This is an astonishing identity that I am not sure can be attained by more elementary approaches. There are many similar divisor-sum formulas that can be obtained by using these techniques, as well as formulas for the number of ways to write an integer as the sum of 2 squares (or 4, or 8 squares), and many, many other cool things. In general, deriving results in number theory by using techniques of analysis is a whole area of study known as analytic number theory, which is my chief area of interest.
