Physical versus mathematical approximations

Most physical systems cannot be described exactly. In turn, approximations are ubiquitous in physics. Their basic use lies in simplifying otherwise intractable problems. But approximations also play a key role in advanced topics like the renormalization group (Batterman 2002, sec. 4.2) or in establishing relations between different physical theories (Batterman 2002, sec. 2.3).

While experienced physicists have an excellent intuitive understanding of how to make meaningful approximations, they sometimes struggle to convey that understanding to their students. A key reason for that struggle might be that the differences between asymptotic expansions and Taylor expansions—two of the main methods for approximations—are rarely discussed in detail. Here, we discuss those differences and clarify why they are important in physics. Explicitly, we will show that the quality of Taylor-expansion-based approximations is controlled on the mathematical side, while the quality of asymptotic-expansion-based approximations is controlled on the physical side. Thus, for brevity, we can also speak of mathematical approximations and physical approximations, respectively.

To discuss similarities and differences between mathematical and physical approximations, the blog post is organized as follows. In section 1, which is focused on the mathematical preliminaries, we give a short reminder of Taylor expansions and a simple definition of asymptotic expansions, compare both types of expansions on an abstract level, and illustrate their key difference with a simple mathematical example. In section 2, we discuss the implications for approximations in physics based on two simple but important examples; namely, we consider the approximation of a pendulum by a harmonic oscillator and the approximation of Newtonian gravity by Galilean gravity.

1. Expansions: Taylor versus asymptotic

Before delving into physical applications in section 2, we give a short reminder of Taylor expansions (section 1.1), a simple definition of asymptotic expansions (section 1.2), compare both expansions on an abstract level (section 1.3), and illustrate the difference with a simple mathematical example (section 1.4).

1.1. Taylor expansion

Let us start with a short reminder about the Taylor expansion1. The \(n\)-th order Taylor expansion of a function \(f(x)\) around \(x=0\) is given by
\[f(x) = \sum_{m=0}^{n} \frac{f^{(m)}(0)}{m!}\, x^m + E_n(x)\ ,\tag{1}\]
where \(f^{(m)}(0)\) is the \(m\)-th derivative of \(f(x)\) evaluated at \(x=0\) and \(E_n(x)\) is the remainder or error term. In Lagrange form, the error term becomes
\[E_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}\, x^{n+1}\ , \tag{2}\]
which looks almost like the term one would get at order \(n+1\), except that \(\xi\) is some number between \(0\) and \(x\). If the error term vanishes when the expansion order goes to infinity, that is, if
\[E_n(x)\, \longrightarrow\, 0 \qquad \mathrm{for} \qquad n \rightarrow \infty\ , \tag{3}\]
then we can find the Taylor series \(f(x) = \sum_{m=0}^\infty (f^{(m)}(0)/m!)\, x^m\) by extending the Taylor expansion to infinite order.
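
To make the limit in equation \((3)\) concrete, the following minimal Python sketch (the language, the example function \(f(x) = e^x\), and the evaluation point \(x=1\) are illustrative choices, not part of the discussion above) prints the error \(E_n(x) = e^x - s_n(x)\) of the partial sums with coefficients \(f^{(m)}(0)/m! = 1/m!\) and shows how it shrinks as the order \(n\) grows:

```python
# Minimal sketch: the Taylor error E_n(x) of f(x) = exp(x) shrinks as the
# expansion order n grows, here at the fixed point x = 1.0 (illustrative choice).
import math

x = 1.0
partial_sum = 0.0
for n in range(11):
    partial_sum += x**n / math.factorial(n)  # add the n-th order term x^n / n!
    error = math.exp(x) - partial_sum        # error term E_n(x) from equation (1)
    print(f"n = {n:2d}   E_n(x) = {error:.3e}")
```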

1.2. Asymptotic expansion

Unlike for Taylor expansions, there seems to be no universally agreed-upon definition of asymptotic expansions. Some general definitions can be found in the references (de Bruijn 1981; Hinch 1991; Paulsen 2014). Here, inspired by (Cahill 2014, sec. 4.12; Whittaker and Watson 2012, sec. 8.2; Bender and Orszag 1999, pp. 89-90), we use a simpler definition of asymptotic expansions that is particularly useful for the comparison with Taylor expansions: a sum \(s_n(x) = \sum_{m=0}^{n} c_m x^m\) is an asymptotic expansion (to \(n\)-th order2) of a function \(f(x)\) for \(x \rightarrow 0\), if
\[f(x) = \sum_{m=0}^{n} c_m x^m + \Delta_n(x) \tag{4}\]
and the error term, defined by the difference
\[\Delta_n(x) := f(x) - s_n(x)\ , \tag{5}\]
goes to \(0\) faster than \(x^n\) or, more formally, if
\[\frac{\Delta_n(x)}{x^n}\, \longrightarrow 0\, \qquad \mathrm{for} \qquad x \rightarrow 0\ .\tag{6}\]
The idea behind this simple definition—but also behind asymptotic expansions in general (de Bruijn 1981)—is that the error term \(\Delta_n(x)\) becomes negligible compared to all non-vanishing terms in \(s_n(x)\), if \(x\) is just small enough. In turn, we can disregard \(\Delta_n(x)\) in equation \((4)\) and approximate \(f(x) \approx s_n(x)\) for small enough \(x\).
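
As a quick numerical illustration of condition \((6)\) (a Python sketch; the function \(\cos x\) and the sample points are illustrative assumptions), take \(f(x) = \cos x\) with the first-order asymptotic expansion \(s_1(x) = 1\), that is, \(c_0 = 1\) and \(c_1 = 0\). The ratio \(\Delta_1(x)/x\) indeed goes to \(0\) as \(x \rightarrow 0\):

```python
# Minimal sketch of condition (6): for f(x) = cos(x) and s_1(x) = 1, the ratio
# Delta_1(x)/x = (cos(x) - 1)/x should approach 0 as x approaches 0.
import math

for x in [1.0, 0.1, 0.01, 0.001]:
    delta_1 = math.cos(x) - 1.0              # error term Delta_1(x), equation (5)
    print(f"x = {x:8.3f}   Delta_1(x)/x = {delta_1 / x:.3e}")
```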

1.3. Comparison of both expansions

At first glance, when comparing equations \((1)\) and \((4)\), the Taylor expansion might seem to be a special case of the asymptotic expansion with coefficients \(c_m = f^{(m)}(0)/m!\,\). However, this is not the case. The two types of expansions are significantly different, because of the difference in the behavior of their error terms \(E_n(x)\) and \(\Delta_n(x)\).

While for the asymptotic expansion the error term \(\Delta_n(x)\) becomes negligible for \(x \rightarrow 0\), for the Taylor expansion the error term \(E_n(x)\) becomes (sometimes) negligible for \(n \rightarrow \infty\) (Paulsen 2014, p. 17). This difference has an important consequence for the logic of approximations. Even if both expansions lead to the same approximation by disregarding their respective error term, the logic for how to improve the approximation is quite different: to improve an asymptotic expansion, we want to make \(x\) smaller; to improve a Taylor expansion, we want to take higher orders \(n\) into account.

Despite the difference in error terms, we can use a Taylor series—if it exists—to find an asymptotic expansion. Explicitly, we can split the Taylor series as
\[f(x) = \sum_{m=0}^n \underbrace{\frac{f^{(m)}(0)}{m!}}_{=:\, c_m}\ x^m + \underbrace{\sum_{m=n+1}^\infty \frac{f^{(m)}(0)}{m!} x^m}_{=:\,\Delta_n(x)}\ ,\tag{7}\]
which is an asymptotic expansion with coefficients \(c_m\) and an error term \(\Delta_n(x)\), for which it is straightforward to show that \(\Delta_n(x)/x^n \longrightarrow 0\) for \(x \rightarrow 0\). So, if a Taylor series exists3, we can improve an expansion-based approximation of a function \(f(x)\) in both ways: by making \(x\) smaller and by taking higher orders \(n\) into account. Probably because of this, experienced physicists often simply speak of “Taylor-expanding” a function, even if they mean to make an asymptotic expansion. However, they will usually still subtly indicate which of the two expansions they mean by speaking of an expansion around \(0\) for a Taylor expansion or an expansion for small \(x\) for an asymptotic expansion.

1.4. Simple mathematical example

To get a better understanding of the difference between both expansions, we consider a simple mathematical example and approximate \(\sin x\) twice; once by an asymptotic expansion and once by a Taylor expansion. For the first-order Taylor expansion of \(\sin x\) around \(x=0\), we find
\[\sin x = x + E_1(x) \tag{8}\]
with \(E_1(x) = (- \sin(\xi)/2!)\, x^2\) for some \(\xi\) between \(0\) and \(x\). For the first-order asymptotic expansion of \(\sin x\) for \(x \rightarrow 0\), we find
\[\sin x = x + \Delta_1(x) \tag{9}\]
with \(\Delta_1(x) := \sin(x) - x\) for which \(\Delta_1(x)/x \longrightarrow 0\) holds for \(x \rightarrow 0\).

For both expansions, disregarding the error term, we find the approximation
\[\sin x \approx x\ . \tag{10}\]
However, while the resulting approximations are the same, the logic for how to improve them is quite different: the logic of Taylor expansions suggests including higher-order terms (assuming that \(E_n(x) \longrightarrow 0\) for \(n \rightarrow \infty\)); in contrast, the logic of asymptotic expansions suggests considering smaller values of \(x\) (because \(\Delta_1(x)/x \longrightarrow 0\) for \(x \rightarrow 0\)); for an illustration, see figure 1. To see how this subtle difference is relevant in physics, we consider two applications in the next section.

Figure 1. To first order, the function \(\sin x\) is approximated by \(x\) in both expansions (Taylor and asymptotic). If we are not satisfied with the accuracy of the approximation at some value \(x_2\), the asymptotic-expansion logic suggests considering a smaller value like \(x_1\), whereas the Taylor-expansion logic suggests including higher-order terms, which leads to \(\sin x \approx x - x^3/3!\) in third order.
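
The two routes illustrated in figure 1 can also be checked numerically. In the following Python sketch (the particular values \(x_2 = 0.5\) and \(x_1 = 0.1\) are illustrative assumptions), the first print shows the error of \(\sin x \approx x\) at \(x_2\), the second shows how it shrinks when we move to the smaller value \(x_1\) (asymptotic logic), and the third shows how it shrinks when we instead stay at \(x_2\) but include the third-order term (Taylor logic):

```python
# Minimal sketch of the two ways to improve the approximation sin(x) ≈ x.
import math

x2, x1 = 0.5, 0.1   # illustrative stand-ins for x_2 and x_1 in figure 1

print(abs(math.sin(x2) - x2))                 # error of the first-order approximation at x_2
print(abs(math.sin(x1) - x1))                 # asymptotic logic: smaller x, smaller error
print(abs(math.sin(x2) - (x2 - x2**3 / 6)))   # Taylor logic: third order, x - x^3/3!, at x_2
```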

2. Application to physics

To see how the subtle difference between asymptotic expansions and Taylor expansions is relevant in physics (section 2.3), we consider two simple but important physical examples: the approximation of a pendulum by a harmonic oscillator (section 2.1) and the approximation of Newtonian gravity by Galilean gravity (section 2.2).

2.1. Pendulum to harmonic oscillator

From Newton’s second law, together with the parallelogram of forces, we find the equation of motion for a pendulum,
\[m l \ddot \phi = - m g \sin \phi\ , \tag{11}\]
where \(m\) is the bob’s mass, \(l\) is the length of the string, \(g = 9.8\, \mathrm{m}/\mathrm{s}^2\) is the gravitational acceleration constant, and \(\phi\) is the angle describing the pendulum’s deflection away from its equilibrium position; see figure 2. This equation of motion is a nonlinear differential equation and, thus, hard to solve4. However, because the nonlinearity is only in \(\sin \phi\), we can significantly simplify the equation of motion by linearizing it; that is, by approximating \(\sin \phi \approx \phi\), as described above. As a result, we obtain the harmonic oscillator equation of motion,
\[\ddot \phi = - \omega_0^2 \phi\ , \tag{12}\]
where we introduced \(\omega_0 = \sqrt{g/l}\). This linearized equation of motion can now be straightforwardly solved and we find \(\phi(t) = \phi_0 \cos \omega_0 t\), where we assumed that initially, at time \(t=0\), the pendulum starts at an angle \(\phi(0) = \phi_0\) with no velocity \(\dot \phi(0) = 0\). So, from the linearized equation of motion, we would predict, for example, that the pendulum will oscillate back and forth with the frequency \(\omega_0\), which is independent of its initial angle \(\phi_0\).

Figure 2. The gravitational force \(\mathbf F_g\) pulls the bob downwards, but only its tangential component \(\mathbf F_t\) contributes to the bob’s motion; its radial component \(\mathbf F_r\) is compensated by the tension in the string. With \(F_g = m g\) and trigonometry, we find \(F_t = - mg \sin \phi\).

If the prediction based on the approximated equation of motion agrees sufficiently well with experimental data, then all is fine. However, if we are not satisfied with the accuracy of our prediction, we need to reconsider our approximation. In this case, an asymptotic-expansion-based approximation suggests considering smaller values of \(\phi\), while a Taylor-expansion-based approximation suggests including higher-order terms in the expansion of \(\sin \phi\).
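
For readers who want to see the quality of the linearization directly, here is a small numerical sketch (in Python with scipy; the string length \(l = 1\,\mathrm{m}\), the initial angles, and the integration settings are illustrative assumptions, not taken from the text above). It integrates the full equation of motion \((11)\) and compares the result with the linearized solution \(\phi(t) = \phi_0 \cos \omega_0 t\): for the small initial angle the deviation stays small, while for the larger one it becomes substantial over a few periods.

```python
# Minimal sketch: full pendulum (equation (11)) versus linearized solution (equation (12)).
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.8, 1.0                  # gravitational acceleration; assumed string length of 1 m
omega0 = np.sqrt(g / l)

def pendulum(t, y):
    phi, phi_dot = y
    return [phi_dot, -(g / l) * np.sin(phi)]   # full, nonlinear equation of motion

t = np.linspace(0.0, 5.0, 500)
for phi0 in [0.1, 1.0]:          # small and not-so-small initial angles (in radians)
    sol = solve_ivp(pendulum, (t[0], t[-1]), [phi0, 0.0], t_eval=t, rtol=1e-8)
    harmonic = phi0 * np.cos(omega0 * t)       # linearized prediction
    deviation = np.max(np.abs(sol.y[0] - harmonic))
    print(f"phi_0 = {phi0:.1f} rad   max deviation = {deviation:.3e} rad")
```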

2.2. Newtonian to Galilean gravity

In the previous subsection, when deriving the pendulum’s equation of motion, we simply assumed that the gravitational force is directed downwards with strength \(F_g = m g\), which is sometimes referred to as the Galilean gravitational force, as it can be used to rederive Galileo’s law of free fall. However, according to Newton’s law of gravity5, the strength of the gravitational force \(F_G\) acting on a bob with mass \(m\) in the gravitational field of the earth with mass \(M\) is given by
\[F_G = G \frac{m M}{r^2}\ , \tag{13}\]
where \(G\) is Newton’s gravitational constant and \(r\) is the distance between the bob’s and earth’s centers of mass. So, it seems that—strictly speaking—our description of the pendulum in section 2.1 is not correct, as we are using the wrong gravitational force. Yet, that is not a real problem.

We can show that, close to the surface of the earth, the Galilean gravitational force holds as an approximation to the Newtonian gravitational force. To do so, we first note that the distance \(r\) can be rewritten as \(r=R+h\), where \(R\) is the radius of the earth and \(h\) is the height of the bob above the surface of the earth. Being close to the surface of the earth means that the height \(h\) will be much smaller than the radius of the earth \(R\) or, correspondingly, that the relative height \(h/R\) is small. More formally, in terms of asymptotic expansions, this means that we are interested in the limit \(h/R \rightarrow 0\). In terms of Taylor expansions, this means that we are interested in relative heights around \(h/R = 0\). In both cases, to \(0\)-th order in \(h/R\), we find the approximation
\[F_G = G \frac{m M}{R^2} \frac{1}{(1 +\frac{h}{R})^2} \approx G \frac{m M}{R^2}\ . \tag{14}\]
To compare the approximation of Newtonian gravity \(F_G\) to Galilean gravity \(F_g\), we identify \(g = G M/R^2\) and, in turn, find
\[F_G \approx m g = F_g\ . \tag{15}\]
Of course, we still need to check whether or not the constants fit together. But indeed, for an approximate earth mass \(M = 6.0 \times 10^{24}\, \mathrm{kg}\), gravitational constant \(G = 6.7 \times 10^{-11}\, \mathrm{m^3}/(\mathrm{kg}\, \mathrm{s}^2)\), and earth radius \(R = 6.4 \times 10^6\, \mathrm{m}\) (Kuchling 2022, p. 145), we find the typical value for the gravitational acceleration constant \(g = 9.8\, \mathrm{m}/\mathrm{s}^2\). So, we conclude that Galilean gravity indeed holds as an approximation to Newtonian gravity close to the surface of the earth6.
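
The identification \(g = GM/R^2\) and the size of the factor neglected in equation \((14)\) are easy to check numerically; here is a minimal Python sketch using the rounded constants quoted above (the height \(h = 100\,\mathrm{m}\) is an illustrative assumption, not part of the text):

```python
# Minimal sketch: g = G*M/R^2 with the rounded constants from the text, and the
# relative size of the factor neglected in equation (14) at an illustrative height h.
G = 6.7e-11    # gravitational constant in m^3/(kg s^2)
M = 6.0e24     # mass of the earth in kg
R = 6.4e6      # radius of the earth in m

g = G * M / R**2
print(f"g = {g:.2f} m/s^2")         # approximately 9.8 m/s^2

h = 100.0                            # assumed height above the surface in m
relative_error = 1.0 - 1.0 / (1.0 + h / R)**2
print(f"relative error of the 0-th order approximation at h = {h:.0f} m: {relative_error:.1e}")
```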

If we are not satisfied with the approximation of Newtonian gravity by Galilean gravity, then an asymptotic-expansion-based approximation suggests considering smaller relative heights \(h/R\), while a Taylor-expansion-based approximation suggests including terms of higher order in the relative height \(h/R\). However, including higher-order terms would not allow us to derive Galilean gravity from Newtonian gravity. This underlines the importance of asymptotic expansions in establishing a relation between different models and theories.
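
For concreteness, expanding \((1+h/R)^{-2}\) to first order (a standard binomial expansion, added here only for illustration) gives
\[F_G \approx G \frac{m M}{R^2}\left(1 - 2\frac{h}{R}\right) = m g \left(1 - 2\frac{h}{R}\right)\ ,\]
which is a height-dependent force and therefore no longer Galilean gravity: the higher-order terms improve the mathematical accuracy, but they do not lead back to the Galilean model.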

2.3. Generalization

From a general perspective, how we can improve an approximation depends on the logic behind it. In the examples above, the Taylor-expansion logic suggests including higher nonlinear orders of \(\sin \phi\) or \((1+h/R)^{-2}\). In contrast, the asymptotic-expansion logic suggests considering smaller values of the angle \(\phi\) or the relative height \(h/R\).

Because of the different ways to improve approximations, we can refer to them simply as mathematical and physical approximations or, more precisely, as mathematically-controlled and physically-controlled approximations:

  • to improve a Taylor-expansion-based approximation, we are supposed to include higher-order terms, which means considering a different mathematical problem;
  • in contrast, to improve an asymptotic-expansion-based approximation, we are supposed to consider smaller values of the expansion variable, which means considering a different physical problem.

If we had the expansion variable under full experimental control, we could guarantee entering a regime where the physical approximation holds to any desired degree of accuracy7. Unfortunately, we rarely have full experimental control over the expansion variable. Instead, we can usually control it only within a certain parameter regime and to a certain accuracy.

Conclusion

In this blog post, we focused on the difference between Taylor expansions and asymptotic expansions and discussed the implications for physics. While the resulting approximations may appear identical, the logic for how to improve them is quite different: for a Taylor expansion, we have to look at a different mathematical problem (higher orders); for an asymptotic expansion, we have to look at a different physical problem (a smaller expansion variable). If we had the expansion variable under full experimental control, we could guarantee entering a regime where the physical approximation is valid to any desired degree of accuracy. However, our experimental control is almost always limited to some parameter regime and by some accuracy. As will be discussed in a future blog post, this limited experimental control makes it important to theoretically estimate in which parameter regime one can expect the physical approximation to hold.

Bibliography

Batterman, Robert W. The Devil in the Details: Asymptotic Reasoning in Explanation, Reduction, and Emergence. New York: Oxford University Press, 2002.

Bender, Carl M., and Steven A. Orszag. Advanced Mathematical Methods for Scientists and Engineers: Asymptotic Methods and Perturbation Theory. 1978. Reprint, New York: Springer-Verlag, 1999.

Cahill, Kevin. Physical Mathematics. 2013. Reprint with corrections, Cambridge: Cambridge University Press, 2014.

de Bruijn, N. G. Asymptotic Methods in Analysis. 1958. Reprint with corrections of the 3rd ed. (1970), New York: Dover Publications, 1981.

Greenberg, Michael D. Foundations of Applied Mathematics. 1978. Reprint, Mineola, NY: Dover Publications, 2013.

Hinch, E. J. Perturbation Methods. 1991. Reprint, Cambridge: Cambridge University Press, 1995.

Iro, Harald. A Modern Approach to Classical Mechanics. [2002?] 2nd ed. Singapore: World Scientific Publishing, 2016.

Kuchling, Horst. Taschenbuch der Physik. 22nd ed. Revised by Thomas Kuchling. Munich: Carl Hanser Verlag, 2022.

Longair, Malcolm. Theoretical Concepts in Physics: An Alternative View of Theoretical Reasoning in Physics. 1984. 3rd ed. Cambridge: Cambridge University Press, 2020.

Negele, John W., and Henri Orland. Quantum Many-Particle Systems. Redwood City, CA: Addison-Wesley Publishing, 1988.

Paulsen, William. Asymptotic Analysis and Perturbation Theory. Boca Raton: Taylor & Francis Group, 2014.

Whittaker, E. T., and G. N. Watson. A Course of Modern Analysis: An Introduction to the General Theory of Infinite Processes and of Analytic Functions; With an Account of the Principal Transcendental Functions. 1902. Reprint of the 2nd ed. (1915), n.p.: Watchmaker Publishing, 2012.

Footnotes

  1. Derivations and more detailed discussions about Taylor expansions can be found in many textbooks on mathematics; for example, see (Greenberg 2013, sec. 2.4) for a particularly clear presentation. ↩︎
  2. For more general definitions of asymptotic expansions, the concept of “order” is not necessarily applicable. ↩︎
  3. If no Taylor series exists, the situation is more complicated; for an instructive example, see (Negele and Orland 1988, pp. 53-57). ↩︎
  4. Even though nonlinear differential equations are generally hard to solve, the solution of equation (11) is known; for example, see (Iro 2016, pp. 43-45). ↩︎
  5. Newton’s law of gravity is discussed in many textbooks of classical mechanics. For a particularly insightful presentation, see (Longair 2020, ch. 4). ↩︎
  6. Actually, this result should not be too surprising. If Newtonian gravity could not reproduce the empirically found gravitational acceleration constant, then something would probably be wrong with Newtonian gravity. ↩︎
  7. Note that this ‘guarantee’ assumes that the model or theory (for example, Newtonian gravity) underlying the approximation (for example, Galilean gravity) is completely correct. Because we have no model or theory that is completely correct, predictions of approximated models and theories might still disagree with experimental findings, even if we are in a parameter regime for which the approximation should hold. However, that disagreement would be a problem of the underlying theory, not a problem of the approximation. ↩︎