Elliptic integrals, the arithmetic-geometric mean and the Brent-Salamin algorithm for π

Notes by G.J.O. Jameson

Contents
1. The integrals I(a, b) and K(b)
2. The arithmetic-geometric mean
3. The evaluation of I(a, b): Gauss's theorem
4. Estimates for I(a, 1) for large a and I(1, b) for small b
5. The integrals J(a, b) and E(b)
6. The integrals L(a, b)
7. Further evaluations in terms of the agm; the Brent-Salamin algorithm
8. Further results on K(b) and E(b)

1. The integrals I(a, b) and K(b)

For $a, b > 0$, define
$$I(a, b) = \int_0^{\pi/2} \frac{d\theta}{(a^2\cos^2\theta + b^2\sin^2\theta)^{1/2}}. \qquad (1)$$

An important equivalent form is:

1.1. We have
$$I(a, b) = \int_0^\infty \frac{dx}{(x^2+a^2)^{1/2}(x^2+b^2)^{1/2}}. \qquad (2)$$

Proof. With the substitution $x = b\tan\theta$, the stated integral becomes
$$\int_0^{\pi/2} \frac{b\sec^2\theta}{(a^2+b^2\tan^2\theta)^{1/2}\,b\sec\theta}\,d\theta = \int_0^{\pi/2} \frac{d\theta}{(a^2+b^2\tan^2\theta)^{1/2}\cos\theta} = \int_0^{\pi/2} \frac{d\theta}{(a^2\cos^2\theta + b^2\sin^2\theta)^{1/2}}.$$

We list a number of facts that follow immediately from (1) or (2):
(I1) $I(a, a) = \pi/(2a)$;
(I2) $I(b, a) = I(a, b)$;
(I3) $I(ca, cb) = \tfrac1c I(a, b)$, hence $aI(a, b) = I(1, b/a)$;
(I4) $I(a, b)$ decreases with $a$ and with $b$;
(I5) if $a \ge b$, then $\pi/(2a) \le I(a, b) \le \pi/(2b)$.

Note that the substitution $x = 1/y$ in (2) gives
$$I(a, b) = \int_0^\infty \frac{dx}{(1+a^2x^2)^{1/2}(1+b^2x^2)^{1/2}}. \qquad (3)$$

1.2. We have $I(a, b) \le \dfrac{\pi}{2(ab)^{1/2}}$.

Proof. This follows from the integral Cauchy-Schwarz inequality, applied to (1). Note further that, by the inequality of the means,
$$\frac{\pi}{2(ab)^{1/2}} \le \frac{\pi}{4}\Bigl(\frac1a + \frac1b\Bigr).$$
This upper bound for $I(a, b)$ is also obtained directly by applying the inequality of the means to the integrand in (2).

Clearly, $I(a, 0)$ is undefined, and one would expect that $I(a, b) \to \infty$ as $b \to 0^+$. This is, indeed, true. In fact, the behaviour of $I(1, b)$ for $b$ close to 0 is quite interesting; we return to it in section 4.

The complete elliptic integral of the first kind is defined by
$$K(b) = \int_0^{\pi/2} \frac{d\theta}{(1 - b^2\sin^2\theta)^{1/2}}. \qquad (4)$$
For $0 \le b \le 1$, the conjugate $b^*$ is defined by $b^* = (1 - b^2)^{1/2}$, so that $b^2 + b^{*2} = 1$. Then $\cos^2\theta + b^2\sin^2\theta = 1 - b^{*2}\sin^2\theta$, so that $I(1, b) = K(b^*)$. (An incomplete elliptic integral is one in which the integral is over a general interval $[0, \alpha]$. We do not consider such integrals in these notes.) Clearly, $K(0) = \pi/2$, and $K(b)$ increases with $b$.

1.3. Another equivalent form is:
$$K(b) = \int_0^1 \frac{dt}{(1-t^2)^{1/2}(1-b^2t^2)^{1/2}}. \qquad (5)$$

Proof. Denote this integral by $I$. Substituting $t = \sin\theta$, we have
$$I = \int_0^{\pi/2} \frac{\cos\theta}{\cos\theta\,(1 - b^2\sin^2\theta)^{1/2}}\,d\theta = K(b).$$

1.4 PROPOSITION. For $0 \le b < 1$,
$$\frac2\pi K(b) = \sum_{n=0}^\infty d_n b^{2n} = 1 + \tfrac14 b^2 + \tfrac9{64}b^4 + \cdots, \qquad (6)$$
where $d_0 = 1$ and
$$d_n = \Bigl(\frac{1\cdot3\cdots(2n-1)}{2\cdot4\cdots(2n)}\Bigr)^2.$$

Proof. By the binomial series,
$$(1 - b^2\sin^2\theta)^{-1/2} = 1 + \sum_{n=1}^\infty a_n b^{2n}\sin^{2n}\theta,$$
where
$$a_n = (-1)^n\binom{-1/2}{n} = \frac1{n!}\cdot\frac12\cdot\frac32\cdots\frac{2n-1}2 = \frac{1\cdot3\cdots(2n-1)}{2\cdot4\cdots(2n)}.$$
For fixed $b < 1$, the series is uniformly convergent for $0 \le \theta \le \pi/2$. Now $\int_0^{\pi/2}\sin^{2n}\theta\,d\theta = \tfrac\pi2 a_n$; identity (6) follows.

In a sense, this series can be regarded as an evaluation of $I(a, b)$, but a more effective one will be described in section 3.

1.5 COROLLARY. For $0 \le b < 1$,
$$1 + \tfrac14 b^2 \le \frac2\pi K(b) \le 1 + \frac{b^2}{4(1-b^2)}.$$

Proof. Just note that $d_n \le \tfrac14$ for $n \ge 1$, so
$$1 + \tfrac14 b^2 \le \sum_{n=0}^\infty d_n b^{2n} \le 1 + \tfrac14(b^2 + b^4 + \cdots).$$
Hence also $\tfrac2\pi K(b) \le 1 + b^2$ for $b^2 \le \tfrac34$.

These are convenient inequalities for $K(b)$, but when translated into an inequality for $I(1, b) = K(b^*)$, the right-hand inequality says $I(1, b) \le \tfrac\pi2\bigl[1 + (1-b^2)/(4b^2)\bigr]$, and it is easily checked that this is weaker than the elementary estimate $I(1, b) \le \tfrac\pi4(1 + 1/b)$ given above.
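The series (6) is easy to compute for any $b < 1$. As a quick numerical illustration, here is a minimal Python sketch — the helper names and the use of Simpson's rule are my own choices, not anything prescribed by the text — which sums the series of 1.4 and checks it against a direct evaluation of the defining integral (4).

```python
import math

def K_series(b, terms=60):
    """K(b) via Proposition 1.4: (pi/2) * sum of d_n * b^(2n)."""
    total, d = 1.0, 1.0
    for n in range(1, terms):
        d *= ((2 * n - 1) / (2 * n)) ** 2   # d_n = d_{n-1} * ((2n-1)/(2n))^2
        total += d * b ** (2 * n)
    return (math.pi / 2) * total

def K_quad(b, steps=10000):
    """K(b) by Simpson's rule applied to the defining integral (4)."""
    f = lambda t: 1.0 / math.sqrt(1.0 - (b * math.sin(t)) ** 2)
    h = (math.pi / 2) / steps
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

for b in (0.1, 0.5, 0.9):
    print(b, K_series(b), K_quad(b))
# The two columns agree; for example K(0.5) is about 1.6858.
```

For $b$ close to 1 the series converges slowly, which is one reason the agm evaluation of section 3 is preferable in practice.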
The special case $I(1, \sqrt2)$

The integral $I(1, \sqrt2)$ equates to a beta integral, and consequently to numerous other integrals. We mention some of them, assuming familiarity with the beta and gamma functions.

1.6 PROPOSITION. We have
$$I(1, \sqrt2) = \int_0^1 \frac{dx}{(1-x^4)^{1/2}} \qquad (7)$$
$$= \int_1^\infty \frac{dx}{(x^4-1)^{1/2}} \qquad (8)$$
$$= \tfrac14 B(\tfrac14, \tfrac12) \qquad (9)$$
$$= \frac{\Gamma(\tfrac14)^2}{4(2\pi)^{1/2}}. \qquad (10)$$

Proof. For (7), substitute $x = \sin\theta$, so that $1 - x^4 = \cos^2\theta(1 + \sin^2\theta)$. Then
$$\int_0^1 \frac{dx}{(1-x^4)^{1/2}} = \int_0^{\pi/2} \frac{d\theta}{(1+\sin^2\theta)^{1/2}} = \int_0^{\pi/2} \frac{d\theta}{(\cos^2\theta + 2\sin^2\theta)^{1/2}} = I(1, \sqrt2).$$
Substituting $x = 1/y$, we also have
$$\int_0^1 \frac{dx}{(1-x^4)^{1/2}} = \int_1^\infty \frac{y^2}{(y^4-1)^{1/2}}\cdot\frac{dy}{y^2} = \int_1^\infty \frac{dy}{(y^4-1)^{1/2}}.$$
Substituting $x = t^{1/4}$ in (7), we obtain
$$I(1, \sqrt2) = \int_0^1 (1-t)^{-1/2}\,\tfrac14 t^{-3/4}\,dt = \tfrac14 B(\tfrac14, \tfrac12).$$
Now using the identities $B(a, b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$ and $\Gamma(a)\Gamma(1-a) = \pi/\sin a\pi$, we have $\Gamma(\tfrac12) = \pi^{1/2}$ and $\Gamma(\tfrac14)\Gamma(\tfrac34) = \pi/\sin\tfrac\pi4 = \pi\sqrt2$, hence
$$B(\tfrac14, \tfrac12) = \frac{\Gamma(\tfrac14)\Gamma(\tfrac12)}{\Gamma(\tfrac34)} = \frac{\pi^{1/2}\,\Gamma(\tfrac14)^2}{\pi\sqrt2} = \frac{\Gamma(\tfrac14)^2}{(2\pi)^{1/2}}.$$

1.7. We have
$$\int_0^{\pi/2} \frac{d\theta}{(\sin\theta)^{1/2}} = 2I(1, \sqrt2), \qquad (11)$$
$$\int_0^1 (1-x^4)^{1/2}\,dx = \int_0^1 (1-x^2)^{1/4}\,dx = \tfrac23 I(1, \sqrt2). \qquad (12)$$

Proof. For (11), substitute $x = (\sin\theta)^{1/2}$ in (7). Denote the two integrals in (12) by $I_1$, $I_2$. The substitutions $x = t^{1/4}$ and $x = t^{1/2}$ give $I_1 = \tfrac14 B(\tfrac14, \tfrac32)$ and $I_2 = \tfrac12 B(\tfrac12, \tfrac54)$. The identity $(a+b)B(a, b+1) = b\,B(a, b)$ now gives $I_1 = I_2 = \tfrac16 B(\tfrac14, \tfrac12) = \tfrac23 I(1, \sqrt2)$.

2. The arithmetic-geometric mean

Given positive numbers $a, b$, the arithmetic mean is $a_1 = \tfrac12(a+b)$ and the geometric mean is $b_1 = (ab)^{1/2}$. If $a = b$, then $a_1 = b_1 = a$. If $a \ne b$, then $a_1 > b_1$, since $4(a_1^2 - b_1^2) = (a+b)^2 - 4ab = (a-b)^2 > 0$.

With $a \ge b > 0$, consider the iteration given by $a_0 = a$, $b_0 = b$ and
$$a_{n+1} = \tfrac12(a_n + b_n), \qquad b_{n+1} = (a_n b_n)^{1/2}.$$
At each stage, the two new numbers are the arithmetic and geometric means of the previous two. The following facts are obvious:
(M1) $a_n \ge b_n$;
(M2) $a_{n+1} \le a_n$ and $b_{n+1} \ge b_n$;
(M3) $a_{n+1} - b_{n+1} \le a_{n+1} - b_n = \tfrac12(a_n - b_n)$, hence $a_n - b_n \le 2^{-n}(a_0 - b_0)$.
It follows that $(a_n)$ and $(b_n)$ converge to a common limit, which is called the arithmetic-geometric mean of $a$ and $b$. We will denote it by $M(a, b)$. Convergence is even more rapid than the estimate implied by (M3), as we will show below.

First, we illustrate the procedure by the calculation of $M(\sqrt2, 1)$ and $M(2, 1)$:

    n    b_n         a_n          |    n    b_n         a_n
    0    1           1.414214     |    0    1           2
    1    1.189207    1.207107     |    1    1.414214    1.5
    2    1.198124    1.198157     |    2    1.456475    1.457107
    3    1.198140    1.198140     |    3    1.456791    1.456791

Of course, $a_n$ and $b_n$ never actually become equal, but more decimal places would be required to show the difference at the last stage shown. In fact, in the first example, $a_3 - b_3 < 10^{-4}$, as we will see shortly.

Clearly, $M(a, b) = M(a_n, b_n)$ and $b_n \le M(a, b) \le a_n$ for all $n$. Also, $M(ca, cb) = cM(a, b)$ for $c > 0$, so $M(a, b) = aM(1, b/a) = bM(a/b, 1)$.

For further analysis of the process, define $c_n \ge 0$ by $a_n^2 - b_n^2 = c_n^2$.

2.1. With $c_n$ defined in this way, we have
(M4) $c_{n+1} = \tfrac12(a_n - b_n)$;
(M5) $a_n = a_{n+1} + c_{n+1}$, $b_n = a_{n+1} - c_{n+1}$;
(M6) $c_n^2 = 4a_{n+1}c_{n+1}$;
(M7) $c_{n+1} \le c_n^2/(4b)$.

Proof. For (M4), we have
$$c_{n+1}^2 = a_{n+1}^2 - b_{n+1}^2 = \tfrac14(a_n + b_n)^2 - a_nb_n = \tfrac14(a_n - b_n)^2.$$
(M5) follows, since $a_{n+1} = \tfrac12(a_n + b_n)$, and (M6) is given by
$$c_n^2 = (a_n + b_n)(a_n - b_n) = (2a_{n+1})(2c_{n+1}).$$
Finally, (M7) follows from (M6), since $a_{n+1} \ge b$.

(M5) shows how to derive $a_n$ and $b_n$ from $a_{n+1}$ and $c_{n+1}$, reversing the agm iteration. (M7) shows that $c_n$ (hence also $a_n - b_n$) converges to 0 quadratically, hence very rapidly. We can derive an explicit bound for $c_n$ as follows:

2.2. Let $R \ge 2$ and let $k$ be such that $c_k \le 4b/R$. Then $c_{n+k} \le 4b/R^{2^n}$ for all $n$. The same holds with $b$ replaced by $b_k$.

Proof. The hypothesis equates to the statement for $n = 0$. Assuming it now for a general $n$, we have by (M7)
$$c_{n+k+1} \le \frac{c_{n+k}^2}{4b} \le \frac{(4b)^2}{4b\,R^{2^{n+1}}} = \frac{4b}{R^{2^{n+1}}}.$$

In the case $M(\sqrt2, 1)$, we have $c_0 = b = 1$. Taking $R = 4$, we deduce $c_n \le 4/4^{2^n}$ for all $n$. For a slightly stronger variant, note that $c_1 = a_0 - a_1 = \tfrac12(\sqrt2 - 1) < \tfrac14$, so that 2.2 applies with $k = 1$ and $R = 16$, giving $c_{n+1} \le 4/16^{2^n}$. In particular, $c_3 \le 4/16^4 < 10^{-4}$.

We finish this section with a variant of the agm iteration in the form of a single sequence instead of a pair of sequences.

2.3. Let $0 < b \le 1$. Define a sequence $(k_n)$ by: $k_0 = b$ and
$$k_{n+1} = \frac{2k_n^{1/2}}{1 + k_n}.$$
Then
$$M(1, b) = \prod_{n=0}^\infty \frac{1 + k_n}{2}.$$

Proof. Let $(a_n)$, $(b_n)$ be the sequences generated by the iteration for $M(1, b)$ (with $a_0 = 1$, $b_0 = b$, whether or not $b \le 1$), and let $k_n = b_n/a_n$. Then $k_0 = b$ and
$$k_{n+1} = \frac{b_{n+1}}{a_{n+1}} = \frac{2(a_nb_n)^{1/2}}{a_n + b_n} = \frac{2k_n^{1/2}}{1 + k_n}.$$
So the sequence $(k_n)$ defined by this iteration is $(b_n/a_n)$. Also,
$$a_n\cdot\frac{1 + k_n}{2} = \frac{a_n + b_n}{2} = a_{n+1},$$
so
$$\prod_{r=0}^{n-1}\frac{1 + k_r}{2} = a_n \to M(1, b) \quad\text{as } n \to \infty.$$
Quadratic convergence of $(k_n)$ to 1 is easily established directly:
$$1 - k_{n+1} = \frac{1 + k_n - 2k_n^{1/2}}{1 + k_n} = \frac{(1 - k_n^{1/2})^2}{1 + k_n} \le (1 - k_n)^2.$$

Another way to present this iteration is as follows. Assume now that $b \le 1$. Recall that for $0 \le k \le 1$, the conjugate $k^*$ is defined by $k^2 + k^{*2} = 1$. Clearly, if $t = 2k^{1/2}/(1+k)$ (where $0 \le k \le 1$), then $t^* = (1-k)/(1+k)$. So we have $k_{n+1}^* = (1 - k_n)/(1 + k_n)$, hence $1 + k_{n+1}^* = 2/(1 + k_n)$, so that
$$M(1, b) = \prod_{n=1}^\infty \frac{1}{1 + k_n^*}.$$
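The iteration converges fast enough that a few lines of code reproduce the values tabulated above. The following Python sketch — the function names are my own choices — implements the pair iteration of section 2 and the single-sequence product of 2.3.

```python
import math

def agm(a, b, tol=1e-15):
    """M(a, b): iterate arithmetic and geometric means until they agree."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def agm_product(b, steps=8):
    """M(1, b) via 2.3: the product of (1 + k_n)/2 with k_0 = b."""
    k, prod = b, 1.0
    for _ in range(steps):
        prod *= (1 + k) / 2
        k = 2 * math.sqrt(k) / (1 + k)
    return prod

print(agm(math.sqrt(2), 1))   # about 1.198140
print(agm(2, 1))              # about 1.456791
# M(sqrt2, 1) = sqrt2 * M(1, 1/sqrt2), using the homogeneity M(ca, cb) = c*M(a, b):
print(math.sqrt(2) * agm_product(1 / math.sqrt(2)))   # again about 1.198140
```

Because the convergence is quadratic ((M7) and 2.2), a handful of steps already give more accuracy than double precision can represent.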
3. The evaluation of I(a, b): Gauss's theorem

We now come to the basic theorem relating $I(a, b)$ and $M(a, b)$. It was established by Gauss in 1799. There are two distinct steps, which we present as separate theorems. In the notation of section 2, we show first that $I(a, b) = I(a_1, b_1)$. It is then a straightforward deduction that $I(a, b) = \pi/[2M(a, b)]$. Early proofs of the first step, including that of Gauss, were far from simple. Here we present a proof due to Newman [New] based on an ingenious substitution.

3.1 THEOREM. Let $a \ge b > 0$, and let $a_1 = \tfrac12(a+b)$, $b_1 = (ab)^{1/2}$. Then
$$I(a, b) = I(a_1, b_1). \qquad (1)$$

Proof. We write $I(a_1, b_1)$ as an integral on $(-\infty, \infty)$, using the fact that the integrand is even:
$$I(a_1, b_1) = \tfrac12\int_{-\infty}^\infty \frac{dx}{(x^2+a_1^2)^{1/2}(x^2+b_1^2)^{1/2}}.$$
Substitute
$$x = \tfrac12\Bigl(t - \frac{ab}{t}\Bigr).$$
Then $x \to -\infty$ as $t \to 0^+$ and $x \to \infty$ as $t \to \infty$. Also, $\dfrac{dx}{dt} = \tfrac12\Bigl(1 + \dfrac{ab}{t^2}\Bigr)$ and
$$4(x^2 + b_1^2) = 4x^2 + 4ab = t^2 + 2ab + \frac{a^2b^2}{t^2} = t^2\Bigl(1 + \frac{ab}{t^2}\Bigr)^2,$$
hence
$$(x^2 + b_1^2)^{1/2} = \frac t2\Bigl(1 + \frac{ab}{t^2}\Bigr).$$
Further,
$$4(x^2 + a_1^2) = 4x^2 + (a+b)^2 = t^2 + a^2 + b^2 + \frac{a^2b^2}{t^2} = \frac1{t^2}(t^2+a^2)(t^2+b^2).$$
Hence
$$I(a_1, b_1) = \tfrac12\int_0^\infty \frac{2t}{(t^2+a^2)^{1/2}(t^2+b^2)^{1/2}}\cdot\frac{2}{t(1 + ab/t^2)}\cdot\tfrac12\Bigl(1 + \frac{ab}{t^2}\Bigr)\,dt = \int_0^\infty \frac{dt}{(t^2+a^2)^{1/2}(t^2+b^2)^{1/2}} = I(a, b).$$

Lord [Lo] gives an interesting alternative proof based on the area in polar coordinates.

3.2 THEOREM. We have
$$I(a, b) = \frac{\pi}{2M(a, b)}. \qquad (2)$$

Proof. Let $(a_n)$, $(b_n)$ be the sequences generated by the iteration for $M(a, b)$. By Theorem 3.1, $I(a, b) = I(a_n, b_n)$ for all $n$. Hence
$$\frac{\pi}{2a_n} \le I(a, b) \le \frac{\pi}{2b_n} \quad\text{for all } n.$$
Since both $a_n$ and $b_n$ converge to $M(a, b)$, the statement follows.

We describe some immediate consequences; more will follow later.

3.3. We have
$$\frac{\pi}{a+b} \le I(a, b) \le \frac{\pi}{2(ab)^{1/2}}. \qquad (3)$$

Proof. This follows from Theorem 3.2, since $\pi/(2a_1) \le I(a_1, b_1) \le \pi/(2b_1)$.

The right-hand inequality in (3) repeats 1.2, but I do not know an elementary proof of the left-hand inequality.

From the expressions equivalent to $I(1, \sqrt2)$ given in 1.6, we have:

3.4. Let $M = M(1, \sqrt2)$ ($\approx 1.1981$). Then
$$\int_0^1 \frac{dx}{(1-x^4)^{1/2}} = I(1, \sqrt2) = \frac{\pi}{2M} \;(\approx 1.3110), \qquad B(\tfrac14, \tfrac12) = \frac{2\pi}{M} \;(\approx 5.2441), \qquad \Gamma(\tfrac14) = \frac{(2\pi)^{3/4}}{M^{1/2}} \;(\approx 3.6256).$$

Example. From the value $M(2, 1) \approx 1.45679$ found in section 2, we have $I(2, 1) \approx 1.0783$.

Finally, we present several ways in which Theorem 3.2 can be rewritten in terms of $K(b)$. Recall that $I(1, b) = K(b^*)$, where $b^2 + b^{*2} = 1$, and that if $k = (1-b)/(1+b)$, then $k^* = 2b^{1/2}/(1+b)$.

3.5. For $0 \le b < 1$,
$$K(b) = I(1+b, 1-b). \qquad (4)$$

Proof. By Theorem 3.1, $I(1+b, 1-b) = I[1, (1-b^2)^{1/2}] = I(1, b^*) = K(b)$.

3.6. For $0 \le b < 1$,
$$K(b) = \frac{1}{1+b}K\Bigl(\frac{2b^{1/2}}{1+b}\Bigr), \qquad (5)$$
$$K(b^*) = \frac{2}{1+b}K\Bigl(\frac{1-b}{1+b}\Bigr). \qquad (6)$$

Proof. By (4) and the property $I(ca, cb) = \tfrac1c I(a, b)$,
$$K(b) = I(1+b, 1-b) = \frac{1}{1+b}I\Bigl(1, \frac{1-b}{1+b}\Bigr) = \frac{1}{1+b}K\Bigl(\frac{2b^{1/2}}{1+b}\Bigr).$$
Also, by Theorem 3.1,
$$K(b^*) = I(1, b) = I\Bigl(\frac{1+b}{2}, b^{1/2}\Bigr) = \frac{2}{1+b}I\Bigl(1, \frac{2b^{1/2}}{1+b}\Bigr) = \frac{2}{1+b}K\Bigl(\frac{1-b}{1+b}\Bigr).$$

Conversely, any of (4), (5), (6) implies (2) (to show this for (4), start by choosing $c$ and $k$ so that $ca = 1 + k$ and $cb = 1 - k$). For another proof of Theorem 3.1, based on this observation and the series expansion of $(1 + ke^{i\theta})^{-1/2}(1 + ke^{-i\theta})^{-1/2}$, see [Bor].
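Gauss's theorem is easy to sanity-check numerically. The sketch below — again my own illustration, using the standard library and Simpson's rule as an arbitrary quadrature choice — evaluates the defining integral (1) of section 1 and compares it with $\pi/(2M(a,b))$.

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

def I_quad(a, b, steps=20000):
    """I(a, b) by Simpson's rule on the theta-integral defining it."""
    f = lambda t: 1.0 / math.sqrt((a * math.cos(t)) ** 2 + (b * math.sin(t)) ** 2)
    h = (math.pi / 2) / steps
    s = f(0.0) + f(math.pi / 2)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3

for a, b in [(2, 1), (1, math.sqrt(2)), (10, 0.3)]:
    print(I_quad(a, b), math.pi / (2 * agm(a, b)))   # the two columns agree
```

In practice one would always use the agm form: it needs only a handful of square roots, whereas the quadrature needs thousands of integrand evaluations for comparable accuracy.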
4. Estimates for I(a, 1) for large a and I(1, b) for small b

In this section, we will show that $I(1, b)$ is closely approximated by $\log(4/b)$ for $b$ close to 0. Equivalently, $aI(a, 1)$ is approximated by $\log 4a$ for large $a$, and consequently $M(a, 1)$ is approximated by
$$F(a) := \frac{\pi a}{2\log 4a}.$$
By way of illustration, $F(10) = 4.2582$ to four d.p., while $M(10, 1) = 4.2504$.

This result has been known for a long time, with varying estimates of the accuracy of the approximation. For example, a version appears in [WW]. A highly effective method was introduced by Newman in [New], but only sketched rather briefly, with the statement given in the form $aI(a, 1) = \log 4a + O(1/a^2)$. Following [Jam], we develop Newman's method in more detail to establish a more precise estimation. The method requires nothing more than some elementary integrals and the binomial and logarithmic series, and a weaker form of the result is obtained with very little effort.

The starting point is:

4.1. We have
$$\int_0^{(ab)^{1/2}} \frac{dx}{(x^2+a^2)^{1/2}(x^2+b^2)^{1/2}} = \int_{(ab)^{1/2}}^\infty \frac{dx}{(x^2+a^2)^{1/2}(x^2+b^2)^{1/2}}.$$

Proof. With the substitution $x = ab/y$, the left-hand integral becomes
$$\int_{(ab)^{1/2}}^\infty \frac{1}{\bigl(\tfrac{a^2b^2}{y^2}+a^2\bigr)^{1/2}\bigl(\tfrac{a^2b^2}{y^2}+b^2\bigr)^{1/2}}\cdot\frac{ab}{y^2}\,dy = \int_{(ab)^{1/2}}^\infty \frac{dy}{(b^2+y^2)^{1/2}(a^2+y^2)^{1/2}}.$$

Hence
$$I(1, b) = 2\int_0^{b^{1/2}} H(x)\,dx, \qquad (1)$$
where
$$H(x) = \frac{1}{(x^2+1)^{1/2}(x^2+b^2)^{1/2}}. \qquad (2)$$
Now $(1+y)^{-1/2} \ge 1 - \tfrac12 y$ for $y \ge 0$, as is easily seen by multiplying out $(1+y)(1-\tfrac12 y)^2$, so for $0 \le x \le 1$,
$$\frac{1 - \tfrac12 x^2}{(x^2+b^2)^{1/2}} \le H(x) \le \frac{1}{(x^2+b^2)^{1/2}}. \qquad (3)$$
Now observe that the substitution $x = by$ gives
$$\int_0^{b^{1/2}} \frac{dx}{(x^2+b^2)^{1/2}} = \int_0^{1/b^{1/2}} \frac{dy}{(y^2+1)^{1/2}} = \sinh^{-1}\frac{1}{b^{1/2}}.$$

4.2 LEMMA. We have
$$\tfrac12\log\frac4b \le \sinh^{-1}\frac{1}{b^{1/2}} \le \tfrac12\log\frac4b + \tfrac14 b. \qquad (4)$$

Proof. Recall that $\sinh^{-1}x = \log[x + (x^2+1)^{1/2}]$, which is clearly greater than $\log 2x$. Also, since $(1+b)^{1/2} \le 1 + \tfrac12 b$,
$$\frac{1}{b^{1/2}} + \Bigl(\frac1b + 1\Bigr)^{1/2} = \frac{1}{b^{1/2}}\bigl[1 + (1+b)^{1/2}\bigr] \le \frac{1}{b^{1/2}}\Bigl(2 + \frac b2\Bigr) = \frac{2}{b^{1/2}}\Bigl(1 + \frac b4\Bigr).$$
The right-hand inequality in (4) now follows from the fact that $\log(1+x) \le x$.

This is already enough to prove the following weak version of our result:

4.3 PROPOSITION. For $0 < b \le 1$ and $a \ge 1$,
$$\log\frac4b - \tfrac12 b \le I(1, b) \le \log\frac4b + \tfrac12 b, \qquad (5)$$
$$\log 4a - \frac{1}{2a} \le aI(a, 1) \le \log 4a + \frac{1}{2a}. \qquad (6)$$

Proof. The two statements are equivalent, because $aI(a, 1) = I(1, 1/a)$. By (1) and (3),
$$\sinh^{-1}\frac{1}{b^{1/2}} - R_1(b) \le \tfrac12 I(1, b) \le \sinh^{-1}\frac{1}{b^{1/2}},$$
where
$$R_1(b) = \tfrac12\int_0^{b^{1/2}} \frac{x^2}{(x^2+b^2)^{1/2}}\,dx. \qquad (7)$$
Since $(x^2+b^2)^{1/2} \ge x$, we have
$$R_1(b) \le \tfrac12\int_0^{b^{1/2}} x\,dx = \tfrac14 b.$$
Both inequalities in (5) now follow from (4), since $2\log(2/b^{1/2}) = \log(4/b)$.

This result is sufficient to show that $M(a, 1) - F(a) \to 0$ as $a \to \infty$. To see this, write $aI(a, 1) = \log 4a + r(a)$. Then
$$\frac{a}{\log 4a} - \frac{1}{I(a, 1)} = \frac{ar(a)}{\log 4a\,[\log 4a + r(a)]},$$
which tends to 0 as $a \to \infty$, since $|ar(a)| \le \tfrac12$; multiplying by $\pi/2$ gives the statement for $F(a) - M(a, 1)$.
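Proposition 4.3 is easy to observe numerically. The sketch below is my own check, not part of the development: it obtains $I(1, b)$ from Gauss's theorem and compares it with $\log(4/b)$; the difference is far smaller than the crude bound $b/2$.

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

for b in (0.5, 0.1, 0.01, 0.001):
    I1b = math.pi / (2 * agm(1.0, b))        # I(1, b) via Theorem 3.2
    print(b, I1b, math.log(4 / b), I1b - math.log(4 / b))
# For b = 0.01 the difference is only about 1e-4, well inside b/2 = 0.005.
```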
Some readers may be content with this version, but for those with the appetite for it, we now refine the method to establish closer estimates. All we need to do is improve our estimations by the insertion of further terms.

The upper estimate in (3) only used $(1+y)^{-1/2} \le 1$. Instead, we now use the stronger bound
$$(1+y)^{-1/2} \le 1 - \tfrac12 y + \tfrac38 y^2$$
for $0 \le y \le 1$. These are the first three terms of the binomial expansion, and the stated inequality holds because the terms of the expansion alternate in sign and decrease in magnitude. So we have, instead of (3):
$$\frac{1 - \tfrac12 x^2}{(x^2+b^2)^{1/2}} \le H(x) \le \frac{1 - \tfrac12 x^2 + \tfrac38 x^4}{(x^2+b^2)^{1/2}}. \qquad (8)$$
We also need a further degree of accuracy in the estimates for $\sinh^{-1}b^{-1/2}$ and $R_1(b)$.

4.4 LEMMA. For $0 < b \le 1$, we have
$$\tfrac12\log\frac4b + \tfrac14 b - \tfrac3{32}b^2 \le \sinh^{-1}\frac{1}{b^{1/2}} \le \tfrac12\log\frac4b + \tfrac14 b - \tfrac1{16}b^2 + \tfrac1{32}b^3. \qquad (9)$$

Proof. Again by the binomial series, we have
$$1 + \tfrac12 b - \tfrac18 b^2 \le (1+b)^{1/2} \le 1 + \tfrac12 b - \tfrac18 b^2 + \tfrac1{16}b^3. \qquad (10)$$
So, as in Lemma 4.2,
$$\frac{1}{b^{1/2}} + \Bigl(\frac1b + 1\Bigr)^{1/2} \le \frac{1}{b^{1/2}}\Bigl(2 + \tfrac12 b - \tfrac18 b^2 + \tfrac1{16}b^3\Bigr) = \frac{2}{b^{1/2}}\Bigl(1 + \tfrac14 b - \tfrac1{16}b^2 + \tfrac1{32}b^3\Bigr).$$
The right-hand inequality in (9) now follows from $\log(1+x) \le x$. Also,
$$\frac{1}{b^{1/2}} + \Bigl(\frac1b + 1\Bigr)^{1/2} \ge \frac{1}{b^{1/2}}\Bigl(2 + \tfrac12 b - \tfrac18 b^2\Bigr) = \frac{2}{b^{1/2}}(1 + B),$$
where $B = \tfrac14 b - \tfrac1{16}b^2$ (so $B \le \tfrac14 b$). By the log series, $\log(1+x) \ge x - \tfrac12 x^2$ for $x \ge 0$, so
$$\log(1+B) \ge B - \tfrac12 B^2 \ge \tfrac14 b - \tfrac1{16}b^2 - \tfrac1{32}b^2 = \tfrac14 b - \tfrac3{32}b^2.$$

4.5 LEMMA. For $0 < b \le 1$, we have:
$$R_1(b) \le \tfrac14 b + \tfrac18 b^2 - \tfrac18 b^2\log\frac4b, \qquad (11)$$
$$R_1(b) \ge \tfrac14 b + \tfrac18 b^2 - \tfrac18 b^2\log\frac4b - \tfrac3{32}b^3, \qquad (12)$$
$$R_1(b) \le \tfrac14 b - \tfrac5{48}b^2. \qquad (13)$$

Proof. We can evaluate $R_1(b)$ explicitly by the substitution $x = b\sinh t$:
$$R_1(b) = \tfrac12\int_0^{c(b)} b^2\sinh^2 t\,dt = \tfrac14 b^2\int_0^{c(b)}(\cosh 2t - 1)\,dt = \tfrac18 b^2\sinh 2c(b) - \tfrac14 b^2 c(b),$$
where $c(b) = \sinh^{-1}(1/b^{1/2})$. Now
$$\sinh 2c(b) = 2\sinh c(b)\cosh c(b) = \frac{2}{b^{1/2}}\Bigl(\frac1b + 1\Bigr)^{1/2} = \frac2b(1+b)^{1/2},$$
so
$$R_1(b) = \tfrac14 b(1+b)^{1/2} - \tfrac14 b^2 c(b).$$
Statement (11) now follows from $(1+b)^{1/2} \le 1 + \tfrac12 b$ and $c(b) \ge \log(2/b^{1/2})$. Statement (12) follows from the left-hand inequality in (10) and the right-hand one in (4). Unfortunately, (13) does not quite follow from (11). We prove it directly from the integral, as follows. Write $x^2/(x^2+b^2)^{1/2} = g(x)$. Since $g(x) \le x$, we have
$$\int_b^{b^{1/2}} g(x)\,dx \le \tfrac12(b - b^2).$$
For $0 \le x \le b$, we have, by the binomial expansion again,
$$g(x) = \frac{x^2}{b(1 + x^2/b^2)^{1/2}} \le \frac{x^2}{b} - \frac{x^4}{2b^3} + \frac{3x^6}{8b^5},$$
and hence
$$\int_0^b g(x)\,dx \le \Bigl(\tfrac13 - \tfrac1{10} + \tfrac3{56}\Bigr)b^2 = \tfrac{241}{840}b^2.$$
Together, these estimates give $R_1(b) \le \tfrac14 b - \tfrac5{48}b^2$.

We can now state the full version of our theorem.

4.6 THEOREM. For $0 < b \le 1$ and $a \ge 1$, we have
$$\log\frac4b \le I(1, b) \le \Bigl(1 + \tfrac14 b^2\Bigr)\log\frac4b, \qquad (14)$$
$$\log 4a \le aI(a, 1) \le \Bigl(1 + \frac{1}{4a^2}\Bigr)\log 4a. \qquad (15)$$
Moreover, the following lower bounds also apply:
$$I(1, b) \ge \Bigl(1 + \tfrac14 b^2\Bigr)\log\frac4b - \tfrac7{16}b^2, \qquad (16)$$
$$aI(a, 1) \ge \Bigl(1 + \frac{1}{4a^2}\Bigr)\log 4a - \frac{7}{16a^2}. \qquad (17)$$
Hence
$$I(1, b) = \Bigl(1 + \tfrac14 b^2\Bigr)\log\frac4b + O(b^2) \ \text{ for } b \to 0^+, \qquad aI(a, 1) = \Bigl(1 + \frac{1}{4a^2}\Bigr)\log 4a + O(1/a^2) \ \text{ for } a \to \infty.$$

Note. As this shows, the estimation $\log 4a + O(1/a^2)$ in [New] is not quite correct.

Proof: lower bounds. By (8), (9) and (13),
$$I(1, b) \ge 2\sinh^{-1}\frac{1}{b^{1/2}} - 2R_1(b) \ge \log\frac4b + \tfrac12 b - \tfrac3{16}b^2 - \bigl(\tfrac12 b - \tfrac5{24}b^2\bigr) \ge \log\frac4b$$
(with a spare term $\tfrac1{48}b^2$). Of course, the key fact is the cancellation of the term $\tfrac12 b$. Also, using (11) instead of (13), we have
$$I(1, b) \ge \log\frac4b + \tfrac12 b - \tfrac3{16}b^2 - \bigl(\tfrac12 b + \tfrac14 b^2 - \tfrac14 b^2\log\tfrac4b\bigr) = \Bigl(1 + \tfrac14 b^2\Bigr)\log\frac4b - \tfrac7{16}b^2.$$

Upper bound. By (8),
$$I(1, b) \le 2\sinh^{-1}\frac{1}{b^{1/2}} - 2R_1(b) + 2R_2(b),$$
where
$$R_2(b) = \tfrac38\int_0^{b^{1/2}} \frac{x^4}{(x^2+b^2)^{1/2}}\,dx \le \tfrac38\int_0^{b^{1/2}} x^3\,dx = \tfrac3{32}b^2.$$
So by (9) and (12),
$$I(1, b) \le \log\frac4b + \tfrac12 b - \tfrac18 b^2 + \tfrac1{16}b^3 - \bigl(\tfrac12 b + \tfrac14 b^2 - \tfrac14 b^2\log\tfrac4b - \tfrac3{16}b^3\bigr) + \tfrac3{16}b^2 = \Bigl(1 + \tfrac14 b^2\Bigr)\log\frac4b - \tfrac3{16}b^2 + \tfrac14 b^3.$$
If $b \le \tfrac34$, then $\tfrac3{16}b^2 \ge \tfrac14 b^3$, so $I(1, b) \le (1 + \tfrac14 b^2)\log\tfrac4b$. For $\tfrac34 \le b \le 1$, we reason as follows. By 1.2, $I(1, b) \le \tfrac\pi4(1 + \tfrac1b)$. Write $h(b) = (1 + \tfrac14 b^2)\log\tfrac4b - \tfrac\pi4(1 + \tfrac1b)$. We find that $h(\tfrac34) > 0$. One verifies by differentiation that $h(b)$ is increasing for $b \ge \tfrac34$ (we omit the details), hence $h(b) > 0$ for $\tfrac34 \le b \le 1$.

By Gauss's theorem and the inequality $1/(1+x) \ge 1 - x$, applied with $x = 1/(4a^2)$, statement (15) translates into the following pair of inequalities for $M(a, 1)$:

4.7 COROLLARY. Let $F(a) = \pi a/(2\log 4a)$. Then for $a \ge 1$,
$$\Bigl(1 - \frac{1}{4a^2}\Bigr)F(a) \le M(a, 1) \le F(a). \qquad (18)$$

Since $M(a, b) = bM(a/b, 1)$, we can derive the following bounds for $M(a, b)$ (where $a \ge b$) in general:
$$\frac{\pi a}{2\log(4a/b)}\Bigl(1 - \frac{b^2}{4a^2}\Bigr) \le M(a, b) \le \frac{\pi a}{2\log(4a/b)}. \qquad (19)$$

Note. We mention very briefly an older, perhaps better known, approach (cf. [Bow]). Use the identity $I(1, b) = K(b^*)$, and note that $b^*$ is close to 1 when $b$ is close to 0. Define
$$A(b^*) = \int_0^{\pi/2} \frac{b^*\sin\theta}{(1 - b^{*2}\sin^2\theta)^{1/2}}\,d\theta$$
and $B(b^*) = K(b^*) - A(b^*)$. One shows by direct integration that $A(b^*) = \log[(1+b^*)/b]$ and $B(1) = \log 2$, so that when $b$ is close to 0, we have $A(b^*) \approx \log(2/b)$ and hence $K(b^*) \approx \log\tfrac4b$. This method can be developed to give an estimation with error term $O(b^2\log\tfrac1b)$ [Bor], but the details are distinctly more laborious, and (even with some refinement) the end result does not achieve the accuracy of our Theorem 4.6.

By a different method, we now establish exact upper and lower bounds for $I(1, b) - \log(1/b)$ on $(0, 1]$. The lower bound amounts to a second proof that $I(1, b) \ge \log(4/b)$.

4.8 PROPOSITION. The expression $I(1, b) - \log(1/b)$ increases with $b$ for $0 < b \le 1$. Hence
$$\log 4 \le I(1, b) - \log\frac1b \le \frac\pi2 \quad\text{for } 0 < b \le 1; \qquad (20)$$
$$\log 4 \le aI(a, 1) - \log a \le \frac\pi2 \quad\text{for } a \ge 1; \qquad (21)$$
$$\frac{\pi a}{2(\log a + \pi/2)} \le M(a, 1) \le \frac{\pi a}{2(\log a + \log 4)} \quad\text{for } a \ge 1. \qquad (22)$$

Proof. We show that $K(b) - \log(1/b^*)$ decreases with $b$, so that $I(1, b) - \log(1/b) = K(b^*) - \log(1/b)$ increases with $b$. Once this is proved, the inequalities follow, since $I(1, b) - \log(1/b)$ takes the value $\pi/2$ at $b = 1$ and (by 4.3) tends to $\log 4$ as $b \to 0^+$. By 1.4, for $0 \le b < 1$, we have $K(b) = \tfrac\pi2\sum_{n=0}^\infty d_n b^{2n}$, where $d_0 = 1$ and $d_n = \bigl[\tfrac{1\cdot3\cdots(2n-1)}{2\cdot4\cdots(2n)}\bigr]^2$. Hence
$$K'(b) = \pi\sum_{n=1}^\infty nd_n b^{2n-1}.$$
Meanwhile, $\log(1/b^*) = -\tfrac12\log(1 - b^2)$, so
$$\frac{d}{db}\log\frac{1}{b^*} = \frac{b}{1 - b^2} = \sum_{n=1}^\infty b^{2n-1}.$$
By well-known inequalities for the Wallis product, $n\pi d_n \le 1$, so $K'(b) \le \dfrac{d}{db}\log\dfrac{1}{b^*}$.
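Theorem 4.6 and Corollary 4.7 can also be checked numerically. The following short sketch — my own, with the same agm helper as before — verifies that $aI(a, 1)$ and $M(a, 1)$ fall inside the stated brackets.

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

for a in (2.0, 10.0, 100.0):
    M = agm(a, 1.0)
    aI = a * math.pi / (2 * M)                        # a*I(a,1), by Theorem 3.2
    L = math.log(4 * a)
    F = math.pi * a / (2 * L)
    ok_15 = L <= aI <= (1 + 1 / (4 * a * a)) * L      # inequality (15)
    ok_18 = (1 - 1 / (4 * a * a)) * F <= M <= F       # inequality (18)
    print(a, ok_15, ok_18)                            # prints True, True for each a
```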
Application to the calculation of logarithms and π. Theorem 4.6 has pleasing applications to the calculation of logarithms (assuming π known) and of π (assuming logarithms known), exploiting the rapid convergence of the agm iteration.

Consider first the problem of calculating $\log x$ (where $x > 1$). Choose $n$, in a way to be discussed below, and let $4a = x^n$, so that $n\log x = \log 4a$, which is approximated by $aI(a, 1)$. We calculate $M = M(a, 1)$, and hence $I(a, 1) = \pi/(2M)$. Then $\log x$ is approximated by $(a/n)I(a, 1)$. How accurate is this approximation? By Theorem 4.6, $\log 4a = aI(a, 1) - r(a)$, where $0 \le r(a) \le \frac{1}{4a^2}\log 4a$. Hence
$$\log x = \frac an I(a, 1) - \frac{r(a)}{n},$$
in which
$$\frac{r(a)}{n} \le \frac{\log 4a}{4a^2 n} = \frac{\log x}{4a^2} = \frac{4\log x}{x^{2n}}.$$
We choose $n$ so that this is within the desired degree of accuracy. It might seem anomalous that $\log x$, which we are trying to calculate, appears in this error estimation, but a rough estimate for it is always readily available.

For illustration, we apply this to the calculation of $\log 2$, taking $n = 8$, so that $a = 2^8/4 = 64$. We find that $M = M(64, 1) \approx 18.1285$, so our approximation is
$$\log 2 \approx \frac an\cdot\frac{\pi}{2M} = \frac{4\pi}{M} \approx 0.69318,$$
while in fact $\log 2 = 0.6931472$ to seven d.p. The discussion above (just taking $\log 2 \le 1$) shows that the approximation overestimates $\log 2$, with error no more than $4/2^{16} \approx 6\times10^{-5}$.

We now turn to the question of approximating π. The simple-minded method is as follows. Choose a suitably large $a$. By Theorems 3.2 and 4.6,
$$\pi = 2M(a, 1)I(a, 1) = \frac{2M(a, 1)\log 4a}{a} + s(a),$$
where
$$0 \le s(a) = \frac{2M(a, 1)r(a)}{a} \le \frac{M(a, 1)\log 4a}{2a^3}.$$
So an approximation to π is $2M(a, 1)\log 4a/a$, with error no more than $s(a)$. Using our original example with $a = 64$, this gives approximately 3.1414, with the error estimated in advance as no more than $2\times10^{-4}$ (the actual error is about $1.6\times10^{-4}$). For a better approximation, one would repeat with a larger $a$. However, there is a way to generate a sequence of increasingly close approximations from a single agm iteration, which we now describe (cf. [Bor, Proposition 3]).
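The simple-minded procedure just described fits in a few lines of Python. This sketch is my own illustration of it (the choice $n = 8$ matches the worked example above; `math.pi` and `math.log` stand in for the "known" quantities in each direction); the refined scheme of [Bor] is not reproduced here.

```python
import math

def agm(a, b, tol=1e-15):
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2

# log 2 via the agm (pi assumed known): take 4a = 2^n, so n*log 2 ~ a*I(a,1).
n = 8
a = 2.0 ** n / 4                        # a = 64
M = agm(a, 1.0)
log2_approx = (a / n) * math.pi / (2 * M)
print(log2_approx, math.log(2))         # ~0.69318 vs 0.6931472; error < 4 / 2**(2*n)

# pi via the agm (logarithms assumed known): pi ~ 2*M(a,1)*log(4a)/a.
pi_approx = 2 * M * math.log(4 * a) / a
print(pi_approx, math.pi)               # ~3.1414 vs 3.1415927; error at most M*log(4a)/(2*a**3)
```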