# Differentiating Under the Integral Sign

Keith Conrad


> I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign – it's a certain operation. It turns out that's not taught very much in the universities; they don't emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else's, and they had tried all their tools on it before giving the problem to me.
>
> Richard Feynman [2, pp. 71–72]

## 1. Introduction

The method of differentiation under the integral sign, due originally to Leibniz, concerns integrals depending on a parameter, such as $\int_0^1 x^2 e^{-tx}\,dx$. Here $t$ is the extra parameter. (Since $x$ is the variable of integration, $x$ is not a parameter.) In general, we might write such an integral as

$$\int_a^b f(x,t)\,dx, \tag{1.1}$$

where $f(x,t)$ is a function of two variables like $f(x,t) = x^2 e^{-tx}$.

**Example 1.1.** Let $f(x,t) = (2x+t^3)^2$. Then

$$\int_0^1 f(x,t)\,dx = \int_0^1 (2x+t^3)^2\,dx.$$

An anti-derivative of $(2x+t^3)^2$ with respect to $x$ is $\frac{1}{6}(2x+t^3)^3$, so

$$\int_0^1 (2x+t^3)^2\,dx = \frac{(2x+t^3)^3}{6}\bigg|_{x=0}^{x=1} = \frac{(2+t^3)^3 - t^9}{6} = \frac{4}{3} + 2t^3 + t^6.$$

This answer is a function of $t$, which makes sense since the integrand depends on $t$. We integrate over $x$ and are left with something that depends only on $t$, not $x$.

An integral like $\int_a^b f(x,t)\,dx$ is a function of $t$, so we can ask about its $t$-derivative, assuming that $f(x,t)$ is nicely behaved. The rule is: the $t$-derivative of the integral of $f(x,t)$ is the integral of the $t$-derivative of $f(x,t)$:

$$\frac{d}{dt}\int_a^b f(x,t)\,dx = \int_a^b \frac{\partial}{\partial t} f(x,t)\,dx. \tag{1.2}$$
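The rule (1.2) can be spot-checked numerically on the integrand of Example 1.1. The sketch below is our own illustration (the paper contains no code); the `simpson` helper is a generic composite Simpson quadrature, and the derivative on the left of (1.2) is approximated by a central difference:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def F(t):
    # F(t) = integral_0^1 (2x + t^3)^2 dx, as in Example 1.1
    return simpson(lambda x: (2 * x + t**3) ** 2, 0.0, 1.0)

t0, h = 1.5, 1e-5
lhs = (F(t0 + h) - F(t0 - h)) / (2 * h)                               # d/dt of the integral
rhs = simpson(lambda x: 2 * (2 * x + t0**3) * 3 * t0**2, 0.0, 1.0)    # integral of df/dt
closed_form = 6 * t0**2 + 6 * t0**5                                   # from Example 1.2
assert abs(lhs - rhs) < 1e-5
assert abs(rhs - closed_form) < 1e-9
```

Both sides agree with the closed form $6t^2 + 6t^5$ computed in Example 1.2.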

This is called differentiation under the integral sign. If you are used to thinking mostly about functions with one variable, not two, keep in mind that (1.2) involves integrals and derivatives with respect to *separate* variables: integration with respect to $x$ and differentiation with respect to $t$.

**Example 1.2.** We saw in Example 1.1 that $\int_0^1 (2x+t^3)^2\,dx = 4/3 + 2t^3 + t^6$, whose $t$-derivative is $6t^2 + 6t^5$. According to (1.2), we can also compute the $t$-derivative of the integral like this:

$$\frac{d}{dt}\int_0^1 (2x+t^3)^2\,dx = \int_0^1 \frac{\partial}{\partial t}(2x+t^3)^2\,dx = \int_0^1 2(2x+t^3)(3t^2)\,dx = \int_0^1 (12t^2 x + 6t^5)\,dx = 6t^2 + 6t^5.$$

The answers agree.

## 2. Euler's factorial integral in a new light

For integers $n \ge 0$, Euler's integral formula for $n!$ is

$$\int_0^\infty x^n e^{-x}\,dx = n!, \tag{2.1}$$

which can be obtained by repeated integration by parts starting from the formula

$$\int_0^\infty e^{-x}\,dx = 1 \tag{2.2}$$

when $n = 0$. Now we are going to derive Euler's formula in another way, by repeated differentiation after introducing a parameter $t$ into (2.2).

For any $t > 0$, let $x = tu$. Then $dx = t\,du$ and (2.2) becomes $\int_0^\infty t e^{-tu}\,du = 1$. Dividing by $t$ and writing $u$ as $x$ (why is this not a problem?), we get

$$\int_0^\infty e^{-tx}\,dx = \frac{1}{t}. \tag{2.3}$$

This is a parametric form of (2.2), where both sides are now functions of $t$. We need $t > 0$ in order that $e^{-tx}$ is integrable over the region $x \ge 0$.

Now we bring in differentiation under the integral sign. Differentiate both sides of (2.3) with respect to $t$, using (1.2) to treat the left side. We obtain

$$\int_0^\infty -x e^{-tx}\,dx = -\frac{1}{t^2},$$

so

$$\int_0^\infty x e^{-tx}\,dx = \frac{1}{t^2}. \tag{2.4}$$
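Formulas (2.3) and (2.4) are easy to confirm numerically. This sketch is our own check, not part of the derivation; the improper integrals are truncated at $x = 40$, where $e^{-tx}$ is negligibly small:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t = 2.0
big = 40.0   # e^{-t*big} ~ 1e-35, so truncating the improper integral here is safe
eq_2_3 = simpson(lambda x: math.exp(-t * x), 0.0, big)
eq_2_4 = simpson(lambda x: x * math.exp(-t * x), 0.0, big)
assert abs(eq_2_3 - 1 / t) < 1e-6        # (2.3): integral = 1/t
assert abs(eq_2_4 - 1 / t**2) < 1e-6     # (2.4): integral = 1/t^2
```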

Differentiate both sides of (2.4) with respect to $t$, again using (1.2) to handle the left side. We get

$$\int_0^\infty -x^2 e^{-tx}\,dx = -\frac{2}{t^3}.$$

Taking out the minus sign on both sides,

$$\int_0^\infty x^2 e^{-tx}\,dx = \frac{2}{t^3}. \tag{2.5}$$

If we continue to differentiate each new equation with respect to $t$ a few more times, we obtain

$$\int_0^\infty x^3 e^{-tx}\,dx = \frac{6}{t^4}, \qquad \int_0^\infty x^4 e^{-tx}\,dx = \frac{24}{t^5}, \qquad \int_0^\infty x^5 e^{-tx}\,dx = \frac{120}{t^6}.$$

Do you see the pattern? It is

$$\int_0^\infty x^n e^{-tx}\,dx = \frac{n!}{t^{n+1}}. \tag{2.6}$$

We have used the presence of the extra variable $t$ to get these equations by repeatedly applying $d/dt$. Now specialize to $t = 1$ in (2.6). We obtain

$$\int_0^\infty x^n e^{-x}\,dx = n!,$$

which is our old friend (2.1). Voilà!

The idea that made this work is introducing a parameter $t$, using calculus on $t$, and then setting $t$ to a particular value so it disappears from the final formula. In other words, sometimes to solve a problem it is useful to solve a *more general* problem. Compare (2.1) to (2.6).

## 3. A damped sine integral

We are going to use differentiation under the integral sign to prove

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t$$

for $t > 0$. Call this integral $F(t)$ and set $f(x,t) = e^{-tx}(\sin x)/x$, so $(\partial/\partial t)f(x,t) = -e^{-tx}\sin x$. Then

$$F'(t) = -\int_0^\infty e^{-tx}\sin x\,dx.$$

The integrand $e^{-tx}\sin x$, as a function of $x$, can be integrated by parts:

$$\int e^{ax}\sin x\,dx = \frac{(a\sin x - \cos x)}{1+a^2}\,e^{ax} + C.$$

Applying this with $a = -t$ and turning the indefinite integral into a definite integral,

$$F'(t) = -\int_0^\infty e^{-tx}\sin x\,dx = \frac{t\sin x + \cos x}{1+t^2}\,e^{-tx}\bigg|_{x=0}^{x=\infty}.$$
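The general pattern (2.6) can be tested numerically before trusting it. This is our own sanity check (not from the paper), truncating the improper integral where the integrand is negligible:

```python
import math

def simpson(f, a, b, n=12000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# check (2.6) for a couple of (n, t) pairs against n!/t^(n+1)
for n_exp, t in [(5, 1.0), (3, 2.0)]:
    val = simpson(lambda x: x**n_exp * math.exp(-t * x), 0.0, 60.0)
    expected = math.factorial(n_exp) / t ** (n_exp + 1)
    assert abs(val - expected) < 1e-4

# at t = 1 and n = 5 this is Euler's formula (2.1): the integral is 5! = 120
val_120 = simpson(lambda x: x**5 * math.exp(-x), 0.0, 60.0)
assert abs(val_120 - 120.0) < 1e-4
```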

As $x \to \infty$, $t\sin x + \cos x$ oscillates a lot, but in a bounded way (since $\sin x$ and $\cos x$ are bounded functions), while the term $e^{-tx}$ decays exponentially to 0 since $t > 0$. So the value at $x = \infty$ is 0. Therefore

$$F'(t) = -\frac{1}{1+t^2}.$$

We know an explicit antiderivative of $-1/(1+t^2)$, namely $-\arctan t$. Since $F(t)$ has the same $t$-derivative as $-\arctan t$, they differ by a constant: for some number $C$,

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = -\arctan t + C \quad \text{for } t > 0. \tag{3.1}$$

We've computed the integral, up to an additive constant, without finding an antiderivative of $e^{-tx}(\sin x)/x$.

To compute $C$ in (3.1), let $t \to \infty$ on both sides. Since $|(\sin x)/x| \le 1$, the absolute value of the integral on the left is bounded from above by $\int_0^\infty e^{-tx}\,dx = 1/t$, so the integral on the left in (3.1) tends to 0 as $t \to \infty$. Since $\arctan t \to \pi/2$ as $t \to \infty$, equation (3.1) as $t \to \infty$ becomes $0 = -\pi/2 + C$, so $C = \pi/2$. Feeding this back into (3.1),

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t \quad \text{for } t > 0. \tag{3.2}$$

If we let $t \to 0^+$ in (3.2), this equation suggests that

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}, \tag{3.3}$$

which is true and it is important in signal processing and Fourier analysis. It is a delicate matter to derive (3.3) from (3.2) since the integral in (3.3) is not absolutely convergent. Details are provided in an appendix.

## 4. The Gaussian integral

The improper integral formula

$$\int_{-\infty}^{\infty} e^{-x^2/2}\,\frac{dx}{\sqrt{2\pi}} = 1 \tag{4.1}$$

is fundamental to probability theory and Fourier analysis. The function $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is called a Gaussian, and (4.1) says the integral of the Gaussian over the whole real line is 1.

The physicist Lord Kelvin (after whom the Kelvin temperature scale is named) once wrote (4.1) on the board in a class and said "A mathematician is one to whom that [pointing at the formula] is as obvious as twice two makes four is to you." We will prove (4.1) using differentiation under the integral sign. The method will not make (4.1) as obvious as $2 \cdot 2 = 4$. If you take further courses you may learn more natural derivations of (4.1) so that the result really does become obvious. For now, just try to follow the argument here step-by-step.

We are going to aim not at (4.1), but at an equivalent formula over the range $x \ge 0$:

$$\int_0^\infty e^{-x^2/2}\,dx = \frac{\sqrt{2\pi}}{2} = \sqrt{\frac{\pi}{2}}. \tag{4.2}$$

For $t > 0$, set

$$A(t) = \left(\int_0^t e^{-x^2/2}\,dx\right)^2.$$

We want to calculate $A(\infty)$ and then take a square root.
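Before continuing, formula (3.2) for the damped sine integral can be checked numerically. This is our own sketch (not from the paper); the tail is truncated at $x = 40$, where $e^{-tx}$ is negligible, and the removable value at $x = 0$ is filled in by hand:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t = 1.0
integrand = lambda x: math.exp(-t * x) * (math.sin(x) / x if x != 0 else 1.0)
val = simpson(integrand, 0.0, 40.0)
expected = math.pi / 2 - math.atan(t)   # right side of (3.2); pi/4 when t = 1
assert abs(val - expected) < 1e-6
```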

Differentiating with respect to $t$,

$$A'(t) = 2\int_0^t e^{-x^2/2}\,dx \cdot e^{-t^2/2} = 2 e^{-t^2/2}\int_0^t e^{-x^2/2}\,dx.$$

Let $x = ty$, so

$$A'(t) = 2 e^{-t^2/2}\int_0^1 t e^{-t^2 y^2/2}\,dy = \int_0^1 2t\,e^{-(1+y^2)t^2/2}\,dy.$$

The function under the integral sign is easily antidifferentiated with respect to $t$:

$$2t\,e^{-(1+y^2)t^2/2} = -\frac{\partial}{\partial t}\,\frac{2e^{-(1+y^2)t^2/2}}{1+y^2},$$

so, using differentiation under the integral sign,

$$A'(t) = -\frac{d}{dt}\int_0^1 \frac{2e^{-(1+y^2)t^2/2}}{1+y^2}\,dy.$$

Letting

$$B(t) = \int_0^1 \frac{2e^{-(1+y^2)t^2/2}}{1+y^2}\,dy,$$

we have $A'(t) = -B'(t)$ for all $t > 0$, so there is a constant $C$ such that

$$A(t) = -B(t) + C \tag{4.3}$$

for all $t > 0$. To find $C$, we let $t \to 0^+$ in (4.3). The left side tends to $(\int_0^0 e^{-x^2/2}\,dx)^2 = 0$, while the right side tends to $-\int_0^1 2\,dy/(1+y^2) + C = -\pi/2 + C$. Thus $C = \pi/2$, so (4.3) becomes

$$\left(\int_0^t e^{-x^2/2}\,dx\right)^2 = \frac{\pi}{2} - \int_0^1 \frac{2e^{-(1+y^2)t^2/2}}{1+y^2}\,dy.$$

Letting $t \to \infty$ here, we get $(\int_0^\infty e^{-x^2/2}\,dx)^2 = \pi/2$, so $\int_0^\infty e^{-x^2/2}\,dx = \sqrt{\pi/2}$. That is (4.2).

## 5. Higher moments of the Gaussian

For every integer $n \ge 0$ we want to compute a formula for

$$\int_{-\infty}^\infty x^n e^{-x^2/2}\,dx. \tag{5.1}$$

(Integrals of the type $\int x^n f(x)\,dx$ for $n = 0, 1, 2, \dots$ are called the *moments* of $f(x)$, so (5.1) is the $n$-th moment of the Gaussian.) When $n$ is odd, (5.1) vanishes since $x^n e^{-x^2/2}$ is an odd function. What if $n = 0, 2, 4, \dots$ is even?

The first case, $n = 0$, is the Gaussian integral (4.1):

$$\int_{-\infty}^\infty e^{-x^2/2}\,dx = \sqrt{2\pi}. \tag{5.2}$$

To get formulas for (5.1) when $n \ne 0$, we follow the same strategy as our treatment of the factorial integral in Section 2: stick a $t$ into the exponent of $e^{-x^2/2}$ and then differentiate repeatedly with respect to $t$.

For $t > 0$, replacing $x$ with $\sqrt{t}\,x$ in (5.2) gives

$$\int_{-\infty}^\infty e^{-tx^2/2}\,dx = \frac{\sqrt{2\pi}}{\sqrt{t}}. \tag{5.3}$$

Differentiate both sides of (5.3) with respect to $t$, using differentiation under the integral sign on the left:

$$\int_{-\infty}^\infty -\frac{x^2}{2}\,e^{-tx^2/2}\,dx = -\frac{\sqrt{2\pi}}{2t^{3/2}},$$
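Both (4.2) and the constancy of $A(t) + B(t) = \pi/2$ in the argument above can be verified numerically. The sketch below is ours, not from the paper; the tail of the Gaussian is truncated at $x = 12$:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# (4.2): integral_0^inf e^{-x^2/2} dx = sqrt(pi/2)
half_gauss = simpson(lambda x: math.exp(-x * x / 2), 0.0, 12.0)
assert abs(half_gauss - math.sqrt(math.pi / 2)) < 1e-8

# A(t) + B(t) should equal pi/2 for every t > 0 (here t = 1.3, an arbitrary choice)
t = 1.3
A = simpson(lambda x: math.exp(-x * x / 2), 0.0, t) ** 2
B = simpson(lambda y: 2 * math.exp(-(1 + y * y) * t * t / 2) / (1 + y * y), 0.0, 1.0)
assert abs(A + B - math.pi / 2) < 1e-8
```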

so

$$\int_{-\infty}^\infty x^2 e^{-tx^2/2}\,dx = \frac{\sqrt{2\pi}}{t^{3/2}}. \tag{5.4}$$

Differentiate both sides of (5.4) with respect to $t$. After removing a common factor of $-1/2$ on both sides, we get

$$\int_{-\infty}^\infty x^4 e^{-tx^2/2}\,dx = \frac{3\sqrt{2\pi}}{t^{5/2}}. \tag{5.5}$$

Differentiating both sides of (5.5) with respect to $t$ a few more times, we get

$$\int_{-\infty}^\infty x^6 e^{-tx^2/2}\,dx = \frac{3 \cdot 5\,\sqrt{2\pi}}{t^{7/2}}, \qquad \int_{-\infty}^\infty x^8 e^{-tx^2/2}\,dx = \frac{3 \cdot 5 \cdot 7\,\sqrt{2\pi}}{t^{9/2}},$$

and

$$\int_{-\infty}^\infty x^{10} e^{-tx^2/2}\,dx = \frac{3 \cdot 5 \cdot 7 \cdot 9\,\sqrt{2\pi}}{t^{11/2}}.$$

Quite generally, when $n$ is even,

$$\int_{-\infty}^\infty x^n e^{-tx^2/2}\,dx = \frac{1 \cdot 3 \cdot 5 \cdots (n-1)}{t^{(n+1)/2}}\,\sqrt{2\pi},$$

where the numerator is the product of the positive odd integers from 1 to $n-1$ (understood to be the empty product 1 when $n = 0$).

In particular, taking $t = 1$ we have computed (5.1):

$$\int_{-\infty}^\infty x^n e^{-x^2/2}\,dx = 1 \cdot 3 \cdot 5 \cdots (n-1)\,\sqrt{2\pi}.$$

As an application of (5.4), we now compute $(1/2)! := \int_0^\infty \sqrt{x}\,e^{-x}\,dx$, where the notation $(1/2)!$ and its definition are inspired by Euler's integral formula (2.1) for $n!$ when $n$ is a nonnegative integer. Using the substitution $x = u^2/2$ in $\int_0^\infty \sqrt{x}\,e^{-x}\,dx$, we have

$$\left(\frac{1}{2}\right)! = \int_0^\infty \frac{u}{\sqrt{2}}\,e^{-u^2/2}\,(u\,du) = \frac{1}{\sqrt{2}}\int_0^\infty u^2 e^{-u^2/2}\,du = \frac{1}{\sqrt{2}} \cdot \frac{\sqrt{2\pi}}{2} \quad \text{(by (5.4) at } t = 1\text{)} = \frac{\sqrt{\pi}}{2}.$$
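The even moments and the value of $(1/2)!$ can be confirmed numerically. This is our own check (not from the paper); by symmetry the moments are computed over $[0, 12]$ and doubled:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# even moments at t = 1: 1*3*...*(n-1) times sqrt(2*pi)
for n_mom, odd_prod in [(2, 1), (4, 3), (6, 15)]:
    val = 2 * simpson(lambda x: x**n_mom * math.exp(-x * x / 2), 0.0, 12.0)
    assert abs(val - odd_prod * math.sqrt(2 * math.pi)) < 1e-6

# (1/2)! = integral_0^inf sqrt(x) e^{-x} dx = sqrt(pi)/2 (looser tolerance:
# the sqrt has unbounded derivatives at 0, which degrades Simpson's accuracy)
half_fact = simpson(lambda x: math.sqrt(x) * math.exp(-x), 0.0, 40.0, 40000)
assert abs(half_fact - math.sqrt(math.pi) / 2) < 1e-3
```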

## 6. A cosine transform of the Gaussian

We are going to compute

$$F(t) = \int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx$$

by looking at its $t$-derivative:

$$F'(t) = -\int_0^\infty x\sin(tx)\,e^{-x^2/2}\,dx. \tag{6.1}$$

This is good from the viewpoint of integration by parts since $-x e^{-x^2/2}$ is the derivative of $e^{-x^2/2}$. So we apply integration by parts to (6.1):

$$u = \sin(tx), \quad dv = -x e^{-x^2/2}\,dx \qquad \Longrightarrow \qquad du = t\cos(tx)\,dx, \quad v = e^{-x^2/2}.$$

Then

$$F'(t) = uv\Big|_{x=0}^{x=\infty} - \int_0^\infty v\,du = \frac{\sin(tx)}{e^{x^2/2}}\bigg|_{x=0}^{x=\infty} - t\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx.$$

As $x \to \infty$, $e^{x^2/2}$ blows up while $\sin(tx)$ stays bounded, so $\sin(tx)/e^{x^2/2}$ goes to 0. Therefore

$$F'(t) = -tF(t).$$

We know the solutions to this differential equation: constant multiples of $e^{-t^2/2}$. So

$$\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = C e^{-t^2/2}$$

for some constant $C$. To find $C$, set $t = 0$. The left side is $\int_0^\infty e^{-x^2/2}\,dx$, which is $\sqrt{\pi/2}$ by (4.2). The right side is $C$. Thus $C = \sqrt{\pi/2}$, so we are done: for all real $t$,

$$\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = \sqrt{\frac{\pi}{2}}\,e^{-t^2/2}.$$

**Remark 6.1.** If we want to compute $G(t) = \int_0^\infty \sin(tx)\,e^{-x^2/2}\,dx$, with $\sin(tx)$ in place of $\cos(tx)$, then in place of $F'(t) = -tF(t)$ we have $G'(t) = 1 - tG(t)$, and $G(0) = 0$. From the differential equation, $(e^{t^2/2}G(t))' = e^{t^2/2}$, so

$$G(t) = e^{-t^2/2}\int_0^t e^{u^2/2}\,du.$$

So while $\int_0^\infty \cos(tx)\,e^{-x^2/2}\,dx = \sqrt{\pi/2}\,e^{-t^2/2}$, the integral $\int_0^\infty \sin(tx)\,e^{-x^2/2}\,dx$ is impossible to express in terms of elementary functions.
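Both the cosine-transform formula and the non-elementary expression for $G(t)$ in Remark 6.1 are easy to verify numerically. This sketch is ours (not from the paper), with the Gaussian tail truncated at $x = 12$:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t = 2.0
# cosine transform: integral_0^inf cos(tx) e^{-x^2/2} dx = sqrt(pi/2) e^{-t^2/2}
F = simpson(lambda x: math.exp(-x * x / 2) * math.cos(t * x), 0.0, 12.0)
assert abs(F - math.sqrt(math.pi / 2) * math.exp(-t * t / 2)) < 1e-8

# Remark 6.1: G(t) = e^{-t^2/2} * integral_0^t e^{u^2/2} du
G = simpson(lambda x: math.exp(-x * x / 2) * math.sin(t * x), 0.0, 12.0)
rhs = math.exp(-t * t / 2) * simpson(lambda u: math.exp(u * u / 2), 0.0, t)
assert abs(G - rhs) < 1e-8
```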

## 7. Logs in the denominator, part I

Consider the following integral over $[0,1]$, where $t > 0$:

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx.$$

Since $1/\log x \to 0$ as $x \to 0^+$, the integrand vanishes at $x = 0$. As $x \to 1^-$, $(x^t - 1)/\log x \to t$. Therefore when $t$ is fixed the integrand is a continuous function of $x$ on $[0,1]$, so the integral is not an improper integral.

The $t$-derivative of this integral is

$$\int_0^1 \frac{x^t \log x}{\log x}\,dx = \int_0^1 x^t\,dx = \frac{1}{t+1},$$

which we recognize as the $t$-derivative of $\log(t+1)$. Therefore

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(t+1) + C$$

for some $C$. To find $C$, let $t \to 0^+$. On the right side, $\log(1+t)$ tends to 0. On the left side, the integrand tends to 0:

$$\left|\frac{x^t - 1}{\log x}\right| = \frac{|e^{t\log x} - 1|}{|\log x|} \le \frac{t|\log x|}{|\log x|} = t,$$

because $|e^y - 1| \le |y|$ when $y \le 0$. Therefore the integral on the left tends to 0 as $t \to 0^+$. So $C = 0$, which implies

$$\int_0^1 \frac{x^t - 1}{\log x}\,dx = \log(t+1) \tag{7.1}$$

for all $t > 0$, and it's obviously also true for $t = 0$. Another way to compute this integral is to write the integrand as a power series and integrate term by term.

Under the change of variables $x = e^{-y}$, (7.1) becomes

$$\int_0^\infty \frac{e^{-y} - e^{-(t+1)y}}{y}\,dy = \log(t+1). \tag{7.2}$$

## 8. Logs in the denominator, part II

We now consider the integral

$$F(t) = \int_2^\infty \frac{dx}{x^t \log x}$$

for $t > 1$. The integral converges by comparison with $\int_2^\infty dx/x^t$. We know that "at $t = 1$" the integral diverges to $\infty$:

$$\int_2^\infty \frac{dx}{x\log x} = \lim_{b\to\infty}\int_2^b \frac{dx}{x\log x} = \lim_{b\to\infty}\log\log x\Big|_2^b = \lim_{b\to\infty}(\log\log b - \log\log 2) = \infty.$$

So we expect that as $t \to 1^+$, $F(t)$ should blow up. But how does it blow up? By analyzing $F'(t)$ and then integrating back, we are going to show $F(t)$ behaves essentially like $-\log(t-1)$ as $t \to 1^+$.
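Formulas (7.1) and (7.2) can be confirmed numerically. This sketch is our own (not from the paper); the removable endpoint values of both integrands are filled in by hand, and `math.expm1` is used near $x = 1$ to avoid cancellation:

```python
import math

def simpson(f, a, b, n=8000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t = 2.0

def g(x):  # integrand of (7.1) with its removable endpoint values
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return t
    return math.expm1(t * math.log(x)) / math.log(x)

val1 = simpson(g, 0.0, 1.0, 4000)
assert abs(val1 - math.log(t + 1)) < 1e-3   # log 3

def h(y):  # integrand of (7.2); its limit at y = 0 is t
    return t if y == 0 else (math.exp(-y) - math.exp(-(t + 1) * y)) / y

val2 = simpson(h, 0.0, 50.0)
assert abs(val2 - math.log(t + 1)) < 1e-6
```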

Using differentiation under the integral sign, for $t > 1$,

$$F'(t) = \int_2^\infty \frac{\partial}{\partial t}\left(\frac{x^{-t}}{\log x}\right)dx = \int_2^\infty \frac{x^{-t}(-\log x)}{\log x}\,dx = -\int_2^\infty \frac{dx}{x^t} = -\frac{x^{1-t}}{1-t}\bigg|_{x=2}^{x=\infty} = \frac{2^{1-t}}{1-t}.$$

We want to bound this derivative from above and below when $t > 1$. Then we will integrate to get bounds on the size of $F(t)$.

For $t > 1$, the difference $1 - t$ is negative, so $2^{1-t} < 1$. Dividing both sides of this inequality by $1-t$, which is negative, reverses the sense of the inequality and gives

$$\frac{2^{1-t}}{1-t} > \frac{1}{1-t}.$$

This is a lower bound on $F'(t)$. To get an upper bound on $F'(t)$, we want to use a lower bound on $2^{1-t}$. Since $e^y \ge y + 1$ for all $y$ (the graph of $e^y$ lies on or above its tangent line at $y = 0$, which is $y + 1$),

$$2^y = e^{y\log 2} \ge (\log 2)y + 1$$

for all $y$. Taking $y = 1 - t$,

$$2^{1-t} \ge (\log 2)(1-t) + 1. \tag{8.1}$$

When $t > 1$, $1 - t$ is negative, so dividing (8.1) by $1 - t$ reverses the sense of the inequality:

$$\frac{2^{1-t}}{1-t} \le \log 2 + \frac{1}{1-t}.$$

This is an upper bound on $F'(t)$. Putting the upper and lower bounds on $F'(t)$ together,

$$\frac{1}{1-t} < F'(t) \le \log 2 + \frac{1}{1-t} \tag{8.2}$$

for all $t > 1$.

We are concerned with the behavior of $F(t)$ as $t \to 1^+$. Let's integrate (8.2) from $t$ to 2, where $1 < t < 2$:

$$\int_t^2 \frac{ds}{1-s} < \int_t^2 F'(s)\,ds \le \int_t^2 \left(\log 2 + \frac{1}{1-s}\right)ds.$$

Using the Fundamental Theorem of Calculus,

$$\log(t-1) < F(2) - F(t) \le (\log 2)(2-t) + \log(t-1),$$

so, manipulating to get inequalities on $F(t)$,

$$-\log(t-1) + F(2) - (\log 2)(2-t) \le F(t) < -\log(t-1) + F(2).$$
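The closed form $F'(t) = 2^{1-t}/(1-t)$ and the sandwich (8.2) are easy to check. This is our own numerical sketch (not from the paper); the quadrature check uses $t = 3$, where $-\int_2^\infty x^{-3}\,dx = -1/8$ exactly:

```python
import math

def simpson(f, a, b, n=40000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# F'(t) = -integral_2^inf x^{-t} dx; compare with 2^{1-t}/(1-t) at t = 3
fp_quad = -simpson(lambda x: x**-3.0, 2.0, 2000.0)
assert abs(fp_quad - 2.0 ** (1 - 3) / (1 - 3)) < 1e-4

# the sandwich (8.2): 1/(1-t) < F'(t) <= log 2 + 1/(1-t) for t > 1
for t in [1.01, 1.1, 1.5, 1.9, 2.5]:
    fp = 2.0 ** (1 - t) / (1 - t)
    assert 1 / (1 - t) < fp <= math.log(2) + 1 / (1 - t)
```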

Since $0 < 2 - t < 1$ for $1 < t < 2$, $-(\log 2)(2-t)$ is greater than $-\log 2$. This gives the bounds

$$-\log(t-1) + F(2) - \log 2 < F(t) < -\log(t-1) + F(2).$$

Writing $-\log(t-1)$ as $\log(1/(t-1))$, we get

$$\log\frac{1}{t-1} + F(2) - \log 2 < F(t) < \log\frac{1}{t-1} + F(2),$$

so $F(t)$ is a bounded distance from $\log(1/(t-1))$ when $1 < t < 2$. In particular, $F(t) \to \infty$ as $t \to 1^+$.

## 9. Smoothly dividing by t

Let $f(t)$ be an infinitely differentiable function for all real $t$ such that $f(0) = 0$. The ratio $f(t)/t$ makes sense for $t \ne 0$, and it also can be given a reasonable meaning at $t = 0$: from the very definition of the derivative, when $t \to 0$ we have

$$\frac{f(t)}{t} = \frac{f(t) - f(0)}{t - 0} \to f'(0).$$

Therefore the function

$$F(t) = \begin{cases} f(t)/t, & \text{if } t \ne 0, \\ f'(0), & \text{if } t = 0 \end{cases}$$

is continuous for all $t$. We can see immediately from the definition of $F(t)$ that it is better than continuous when $t \ne 0$: it is infinitely differentiable when $t \ne 0$. The question we want to address is this: is $F(t)$ infinitely differentiable at $t = 0$ too?

If $f(t)$ has a power series representation around $t = 0$, then it is easy to show that $F(t)$ is infinitely differentiable at $t = 0$ by working with the series for $f(t)$. Indeed, write

$$f(t) = c_1 t + c_2 t^2 + c_3 t^3 + \cdots$$

for all small $t$. Here $c_1 = f'(0)$, $c_2 = f''(0)/2!$, and so on. For small $t \ne 0$, we divide by $t$ and get

$$F(t) = c_1 + c_2 t + c_3 t^2 + \cdots, \tag{9.1}$$

which is a power series representation for $F(t)$ for all small $t \ne 0$. The value of the right side of (9.1) at $t = 0$ is $c_1 = f'(0)$, which is also the defined value of $F(0)$, so (9.1) is valid for all small $t$ (including $t = 0$). Therefore $F(t)$ has a power series representation around 0 (it's just the power series for $f(t)$ at 0 divided by $t$). Since functions with power series representations around a point are infinitely differentiable at the point, $F(t)$ is infinitely differentiable at $t = 0$.

However, this is an incomplete answer to our question about the infinite differentiability of $F(t)$ at $t = 0$ because we know by the key example of $e^{-1/t^2}$ (at $t = 0$) that a function can be infinitely differentiable at a point without having a power series representation at the point. How are we going to show $F(t) = f(t)/t$ is infinitely differentiable at $t = 0$ if we don't have a power series to help us out? Might there actually be a counterexample?

The solution is to write $f(t)$ in a very clever way using differentiation under the integral sign. Start with

$$f(t) = \int_0^t f'(u)\,du.$$

(This is correct since $f(0) = 0$.) For $t \ne 0$, introduce the change of variables $u = tx$, so $du = t\,dx$. At the boundary, if $u = 0$ then $x = 0$. If $u = t$ then $x = 1$ (we can divide the equation $t = tx$ by $t$ because $t \ne 0$). Therefore

$$f(t) = \int_0^1 f'(tx)\,t\,dx = t\int_0^1 f'(tx)\,dx.$$
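The identity $f(t) = t\int_0^1 f'(tx)\,dx$ can be spot-checked with a concrete choice of $f$. This sketch is our own illustration (not from the paper), taking $f(t) = \sin t$, which satisfies $f(0) = 0$:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# with f(t) = sin t, f'(u) = cos u, the identity says sin t = t * integral_0^1 cos(tx) dx
for t in [0.5, 2.0, -3.0]:
    val = t * simpson(lambda x: math.cos(t * x), 0.0, 1.0)
    assert abs(val - math.sin(t)) < 1e-10
```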

Dividing by $t$ when $t \ne 0$, we get

$$F(t) = \frac{f(t)}{t} = \int_0^1 f'(tx)\,dx.$$

The left and right sides don't have any $t$ in the denominator. Are they equal at $t = 0$ too? The left side at $t = 0$ is $F(0) = f'(0)$. The right side is $\int_0^1 f'(0)\,dx = f'(0)$ too, so

$$F(t) = \int_0^1 f'(tx)\,dx \tag{9.2}$$

for all $t$, including $t = 0$. This is a formula for $f(t)/t$ where there is no longer a $t$ being divided!

Now we're set to use differentiation under the integral sign. The way we have set things up here, we want to differentiate with respect to $t$; the integration variable on the right is $x$. We can use differentiation under the integral sign on (9.2) when the integrand is differentiable. Since $f$ is infinitely differentiable, $F(t)$ is infinitely differentiable! Explicitly,

$$F'(t) = \int_0^1 x f''(tx)\,dx \qquad \text{and} \qquad F''(t) = \int_0^1 x^2 f'''(tx)\,dx,$$

and more generally

$$F^{(k)}(t) = \int_0^1 x^k f^{(k+1)}(tx)\,dx.$$

In particular,

$$F^{(k)}(0) = \int_0^1 x^k f^{(k+1)}(0)\,dx = \frac{f^{(k+1)}(0)}{k+1}.$$

## 10. Counterexamples

We have seen many examples where differentiation under the integral sign can be carried out with interesting results, but we have not actually stated conditions under which (1.2) is valid. Something does need to be checked. In [6], an incorrect use of differentiation under the integral sign due to Cauchy is discussed, where a divergent integral is evaluated as a finite expression. Here are two other examples where differentiation under the integral sign does not work.

**Example 10.1.** It is pointed out in [3] that the formula

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2},$$

which we discussed at the end of Section 3, can be rewritten as

$$\int_0^\infty \frac{\sin(ty)}{y}\,dy = \frac{\pi}{2} \tag{10.1}$$

for any $t > 0$, by the change of variables $x = ty$. Then differentiation under the integral sign implies

$$\int_0^\infty \cos(ty)\,dy = 0,$$

which doesn't make sense.

The next example shows that even if both sides of (1.2) make sense, they need not be equal.

**Example 10.2.** For any real numbers $x$ and $t$, let

$$f(x,t) = \begin{cases} \dfrac{xt^3}{(x^2+t^2)^2}, & \text{if } x \ne 0 \text{ or } t \ne 0, \\ 0, & \text{if } x = 0 \text{ and } t = 0. \end{cases}$$

Let

$$F(t) = \int_0^1 f(x,t)\,dx.$$

For instance, $F(0) = \int_0^1 f(x,0)\,dx = \int_0^1 0\,dx = 0$. When $t \ne 0$,

$$F(t) = \int_0^1 \frac{xt^3}{(x^2+t^2)^2}\,dx = \frac{t^3}{2}\int_{t^2}^{1+t^2}\frac{du}{u^2} \quad (\text{where } u = x^2 + t^2) = \frac{t^3}{2}\left(\frac{1}{t^2} - \frac{1}{1+t^2}\right) = \frac{t}{2(1+t^2)}.$$

This formula also works at $t = 0$, so $F(t) = t/(2(1+t^2))$ for all $t$. Therefore $F(t)$ is differentiable and

$$F'(t) = \frac{1-t^2}{2(1+t^2)^2}$$

for all $t$. In particular, $F'(0) = \tfrac{1}{2}$.

Now we compute $\frac{\partial f}{\partial t}(x,t)$ and then $\int_0^1 \frac{\partial f}{\partial t}(x,0)\,dx$. Since $f(0,t) = 0$ for all $t$, $f(0,t)$ is differentiable in $t$ and $\frac{\partial f}{\partial t}(0,t) = 0$. For $x \ne 0$, $f(x,t)$ is differentiable in $t$ and

$$\frac{\partial f}{\partial t}(x,t) = \frac{3xt^2(x^2+t^2)^2 - xt^3 \cdot 2(x^2+t^2)\cdot 2t}{(x^2+t^2)^4} = \frac{xt^2(3x^2 - t^2)}{(x^2+t^2)^3}.$$

Combining both cases ($x = 0$ and $x \ne 0$),

$$\frac{\partial f}{\partial t}(x,t) = \begin{cases} \dfrac{xt^2(3x^2-t^2)}{(x^2+t^2)^3}, & \text{if } x \ne 0, \\ 0, & \text{if } x = 0. \end{cases} \tag{10.2}$$

In particular, $\frac{\partial f}{\partial t}(x,t)\big|_{t=0} = 0$ for every $x$. Therefore at $t = 0$ the left side of the "formula"

$$\frac{d}{dt}\int_0^1 f(x,t)\,dx = \int_0^1 \frac{\partial}{\partial t}f(x,t)\,dx$$

is $F'(0) = \tfrac{1}{2}$ and the right side is $\int_0^1 \frac{\partial f}{\partial t}(x,0)\,dx = 0$. The two sides are unequal!
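The failure in Example 10.2 shows up clearly in a numerical experiment. The sketch below is our own (not from the paper): a central difference of the quadrature values of $F$ gives $F'(0) \approx 1/2$, while the integral of $\partial f/\partial t$ at $t = 0$ is exactly 0:

```python
def simpson(f, a, b, n=40000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

def f(x, t):
    if x == 0 and t == 0:
        return 0.0
    return x * t**3 / (x * x + t * t) ** 2

def F(t):
    return simpson(lambda x: f(x, t), 0.0, 1.0)

def df_dt(x, t):   # formula (10.2)
    if x == 0:
        return 0.0
    return x * t * t * (3 * x * x - t * t) / (x * x + t * t) ** 3

h = 0.01
left = (F(h) - F(-h)) / (2 * h)                    # numerical F'(0): about 1/2
right = simpson(lambda x: df_dt(x, 0.0), 0.0, 1.0)  # exactly 0, since df/dt(x,0) = 0
assert abs(left - 0.5) < 1e-3
assert right == 0.0
```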

The problem in this example is that $\frac{\partial f}{\partial t}(x,t)$ is not a continuous function of $(x,t)$. Indeed, the denominator in the formula in (10.2) is $(x^2+t^2)^3$, which has a problem near $(0,0)$. Specifically, while this derivative vanishes at $(0,0)$, if we let $(x,t) \to (0,0)$ along the line $x = t$, then on this line $\frac{\partial f}{\partial t}(x,t)$ has the value $1/(4x)$, which does not tend to 0 as $(x,t) \to (0,0)$.

**Theorem 10.3.** The equation

$$\frac{d}{dt}\int_a^b f(x,t)\,dx = \int_a^b \frac{\partial}{\partial t}f(x,t)\,dx$$

is valid at $t = t_0$, in the sense that both sides exist and are equal, provided the following two conditions hold:

- $f(x,t)$ and $\frac{\partial}{\partial t}f(x,t)$ are continuous functions of two variables when $x$ is in the range of integration and $t$ is in some interval around $t_0$,
- there are upper bounds $|f(x,t)| \le A(x)$ and $\left|\frac{\partial}{\partial t}f(x,t)\right| \le B(x)$, both being independent of $t$, such that $\int_a^b A(x)\,dx$ and $\int_a^b B(x)\,dx$ exist.

*Proof.* See [4, pp. 337–339]. If the interval of integration is infinite, $\int_a^b A(x)\,dx$ and $\int_a^b B(x)\,dx$ are improper.

In Table 1 we include choices for $A(x)$ and $B(x)$ for each of the functions we have treated. Since the calculation of a derivative at a point only depends on an interval around the point, we have replaced a $t$-range such as $t > 0$ with $t \ge c > 0$ in some cases to obtain choices for $A(x)$ and $B(x)$.

| Section | $f(x,t)$ | $x$ range | $t$ range | $A(x)$ | $B(x)$ |
|---|---|---|---|---|---|
| 2 | $x^n e^{-tx}$ | $[0,\infty)$ | $t \ge c > 0$ | $x^n e^{-cx}$ | $x^{n+1} e^{-cx}$ |
| 3 | $e^{-tx}\frac{\sin x}{x}$ | $(0,\infty)$ | $t \ge c > 0$ | $e^{-cx}$ | $e^{-cx}$ |
| 4 | $\frac{2e^{-(1+x^2)t^2/2}}{1+x^2}$ | $[0,1]$ | $t \ge c > 0$ | $\frac{2}{1+x^2}$ | $\frac{2}{\sqrt{1+x^2}}$ |
| 5 | $x^n e^{-tx^2/2}$ | $(-\infty,\infty)$ | $t \ge c > 0$ | $x^n e^{-cx^2/2}$ | $x^{n+2} e^{-cx^2/2}$ |
| 6 | $e^{-x^2/2}\cos(tx)$ | $[0,\infty)$ | all $t$ | $e^{-x^2/2}$ | $x e^{-x^2/2}$ |
| 7 | $\frac{x^t-1}{\log x}$ | $(0,1]$ | $0 \le t \le c$ | $\frac{1-x^c}{\lvert\log x\rvert}$ | $1$ |
| 8 | $\frac{1}{x^t\log x}$ | $[2,\infty)$ | $t \ge c > 1$ | $\frac{1}{x^c\log x}$ | $\frac{1}{x^c}$ |
| 9 | $x^k f^{(k+1)}(tx)$ | $[0,1]$ | $\lvert t\rvert \le c$ | $\max_{\lvert u\rvert\le c}\lvert f^{(k+1)}(u)\rvert$ | $\max_{\lvert u\rvert\le c}\lvert f^{(k+2)}(u)\rvert$ |

Table 1. Summary

**Corollary 10.4.** If $a(t)$ and $b(t)$ are both differentiable, then

$$\frac{d}{dt}\int_{a(t)}^{b(t)} f(x,t)\,dx = \int_{a(t)}^{b(t)} \frac{\partial}{\partial t}f(x,t)\,dx + f(b(t),t)\,b'(t) - f(a(t),t)\,a'(t)$$

if the following conditions are satisfied:

- there are $\alpha < \beta$ and $c_1 < c_2$ such that $f(x,t)$ and $\frac{\partial}{\partial t}f(x,t)$ are continuous on $[\alpha,\beta] \times (c_1,c_2)$,
- we have $a(t) \in [\alpha,\beta]$ and $b(t) \in [\alpha,\beta]$ for all $t \in (c_1,c_2)$,
- there are upper bounds $|f(x,t)| \le A(x)$ and $\left|\frac{\partial}{\partial t}f(x,t)\right| \le B(x)$ for $(x,t) \in [\alpha,\beta] \times (c_1,c_2)$ such that $\int_\alpha^\beta A(x)\,dx$ and $\int_\alpha^\beta B(x)\,dx$ exist.

*Proof.* This is a consequence of Theorem 10.3 and the chain rule for multivariable functions. Set a function of three variables

$$I(t,a,b) = \int_a^b f(x,t)\,dx$$

for $(t,a,b) \in (c_1,c_2) \times [\alpha,\beta] \times [\alpha,\beta]$. (Here $a$ and $b$ are *not* functions of $t$.) Then

$$\frac{\partial I}{\partial t}(t,a,b) = \int_a^b \frac{\partial}{\partial t}f(x,t)\,dx, \qquad \frac{\partial I}{\partial a}(t,a,b) = -f(a,t), \qquad \frac{\partial I}{\partial b}(t,a,b) = f(b,t), \tag{10.3}$$

where the first formula follows from Theorem 10.3 (its hypotheses are satisfied for each $a$ and $b$ in $[\alpha,\beta]$) and the second and third formulas are the Fundamental Theorem of Calculus. For differentiable functions $a(t)$ and $b(t)$ with values in $[\alpha,\beta]$ for $c_1 < t < c_2$, by the chain rule

$$\frac{d}{dt}\int_{a(t)}^{b(t)} f(x,t)\,dx = \frac{d}{dt}I(t,a(t),b(t)) = \frac{\partial I}{\partial t}(t,a(t),b(t)) + \frac{\partial I}{\partial a}(t,a(t),b(t))\,a'(t) + \frac{\partial I}{\partial b}(t,a(t),b(t))\,b'(t)$$

$$= \int_{a(t)}^{b(t)} \frac{\partial f}{\partial t}(x,t)\,dx - f(a(t),t)\,a'(t) + f(b(t),t)\,b'(t) \quad \text{by (10.3).}$$

A version of differentiation under the integral sign for a complex variable $t$ is in [5, pp. 392–393].

## 11. An example needing a change of variables

Our next example is taken from [1, pp. 78, 84]. For all $t \in \mathbf{R}$, we will show by differentiation under the integral sign that

$$\int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx = \pi e^{-|t|}. \tag{11.1}$$

Here $f(x,t) = \cos(tx)/(1+x^2)$. Since $f(x,t)$ is continuous and $|f(x,t)| \le 1/(1+x^2)$, the integral exists for all $t$. The function $\pi e^{-|t|}$ is not differentiable at $t = 0$, so we shouldn't expect to be able to prove (11.1) at $t = 0$ using differentiation under the integral sign; this special case can be treated with elementary calculus:

$$\int_{-\infty}^\infty \frac{dx}{1+x^2} = \arctan x\Big|_{-\infty}^\infty = \pi.$$

The integral in (11.1) is an even function of $t$, so to compute it for $t \ne 0$ it suffices to treat the case $t > 0$. (A reader who knows complex analysis can derive (11.1) for $t > 0$ by the residue theorem, viewing $\cos(tx)$ as the real part of $e^{itx}$.)

Let

$$F(t) = \int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx.$$

If we try to compute $F'(t)$ for $t > 0$ using differentiation under the integral sign, we get

$$F'(t) \overset{?}{=} \int_{-\infty}^\infty \frac{\partial}{\partial t}\left(\frac{\cos(tx)}{1+x^2}\right)dx = -\int_{-\infty}^\infty \frac{x\sin(tx)}{1+x^2}\,dx. \tag{11.2}$$

Unfortunately, there is no upper bound $\left|\frac{\partial}{\partial t}f(x,t)\right| \le B(x)$ that justifies differentiating $F(t)$ under the integral sign (or even justifies that $F(t)$ is differentiable). Indeed, when $x$ is near a large odd
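Corollary 10.4 (the Leibniz rule with variable limits) is easy to exercise numerically. The sketch below is our own illustration, not from the paper; the choices $f(x,t) = \sin(tx)$, $a(t) = t$, $b(t) = t^2$ are arbitrary smooth examples:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t0 = 1.3

def G(t):  # integral of sin(tx) over [a(t), b(t)] = [t, t^2]
    return simpson(lambda x: math.sin(t * x), t, t * t)

h = 1e-5
lhs = (G(t0 + h) - G(t0 - h)) / (2 * h)   # numerical d/dt of the integral
rhs = (simpson(lambda x: x * math.cos(t0 * x), t0, t0 * t0)  # integral of df/dt
       + math.sin(t0 * t0**2) * (2 * t0)   # f(b(t),t) * b'(t)
       - math.sin(t0 * t0) * 1.0)          # f(a(t),t) * a'(t)
assert abs(lhs - rhs) < 1e-6
```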

multiple of $(\pi/2)/t$, the integrand in (11.2) has values that are approximately $x/(1+x^2) \approx 1/x$, which is not integrable for large $x$. That does not mean (11.2) is actually false, although if we weren't already told the answer on the right side of (11.1) then we might be suspicious about whether the integral is differentiable for all $t > 0$; after all, you can't easily tell from the integral that it is not differentiable at $t = 0$.

Having already raised suspicions about (11.2), we can get something really crazy if we differentiate under the integral sign a second time:

$$F''(t) \overset{?}{=} -\int_{-\infty}^\infty \frac{x^2\cos(tx)}{1+x^2}\,dx.$$

If this made sense then

$$F''(t) \overset{?}{=} -\int_{-\infty}^\infty \frac{(x^2+1-1)\cos(tx)}{1+x^2}\,dx = -\int_{-\infty}^\infty \cos(tx)\,dx + F(t) = ???$$

All is not lost! Let's make a change of variables. Fixing $t > 0$, set $y = tx$, so $dy = t\,dx$ and

$$F(t) = \int_{-\infty}^\infty \frac{\cos y}{1+y^2/t^2}\,\frac{dy}{t} = \int_{-\infty}^\infty \frac{t\cos y}{t^2+y^2}\,dy.$$

This new integral will be accessible to differentiation under the integral sign. (Although the new integral is an odd function of $t$ while $F(t)$ is an even function of $t$, there is no contradiction because this new integral was derived only for $t > 0$.)

Fix $c' > c > 0$. For $t \in (c,c')$, the integrand in $\int_{-\infty}^\infty t\cos y/(t^2+y^2)\,dy$ is bounded above in absolute value by $t/(t^2+y^2) \le c'/(c^2+y^2)$, which is independent of $t$ and integrable over $\mathbf{R}$. The $t$-partial derivative of the integrand is $\frac{y^2-t^2}{(t^2+y^2)^2}\cos y$, which is bounded above in absolute value by $\frac{y^2+t^2}{(t^2+y^2)^2} = \frac{1}{t^2+y^2} \le \frac{1}{c^2+y^2}$, which is independent of $t$ and integrable over $\mathbf{R}$. This justifies the use of differentiation under the integral sign according to Theorem 10.3: for $c < t < c'$, and hence for all $t > 0$ since we never specified $c$ or $c'$,

$$F'(t) = \int_{-\infty}^\infty \frac{\partial}{\partial t}\left(\frac{t\cos y}{t^2+y^2}\right)dy = \int_{-\infty}^\infty \frac{y^2-t^2}{(t^2+y^2)^2}\cos y\,dy.$$

We want to compute $F''(t)$ using differentiation under the integral sign. For $0 < c < t < c'$, the $t$-partial derivative of the integrand for $F'(t)$ is bounded above in absolute value by a function of $y$ that is independent of $t$ and integrable over $\mathbf{R}$ (exercise), so for all $t > 0$ we have

$$F''(t) = \int_{-\infty}^\infty \frac{\partial^2}{\partial t^2}\left(\frac{t\cos y}{t^2+y^2}\right)dy = \int_{-\infty}^\infty \frac{\partial^2}{\partial t^2}\left(\frac{t}{t^2+y^2}\right)\cos y\,dy.$$

It turns out that $\frac{\partial^2}{\partial t^2}\left(\frac{t}{t^2+y^2}\right) = -\frac{\partial^2}{\partial y^2}\left(\frac{t}{t^2+y^2}\right)$, so

$$F''(t) = -\int_{-\infty}^\infty \frac{\partial^2}{\partial y^2}\left(\frac{t}{t^2+y^2}\right)\cos y\,dy.$$

Using integration by parts on this formula for $F''(t)$ twice (starting with $u = \cos y$ and $dv = \frac{\partial^2}{\partial y^2}\left(\frac{t}{t^2+y^2}\right)dy$), we obtain

$$F''(t) = -\int_{-\infty}^\infty \frac{\partial}{\partial y}\left(\frac{t}{t^2+y^2}\right)\sin y\,dy = \int_{-\infty}^\infty \frac{t}{t^2+y^2}\,\cos y\,dy = F(t).$$

The equation $F''(t) = F(t)$ is a second order linear ODE whose general solution is $ae^t + be^{-t}$, so

$$\int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx = ae^t + be^{-t} \tag{11.4}$$

for all $t > 0$ and some real constants $a$ and $b$. To determine $a$ and $b$ we look at the behavior of the integral in (11.4) as $t \to 0^+$ and as $t \to \infty$.

As $t \to 0^+$, the integrand in (11.4) tends pointwise to $1/(1+x^2)$, so we expect the integral tends to $\int_{-\infty}^\infty dx/(1+x^2) = \pi$ as $t \to 0^+$. To justify this, we will bound the absolute value of the difference

$$\int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx - \int_{-\infty}^\infty \frac{dx}{1+x^2} = \int_{-\infty}^\infty \frac{\cos(tx)-1}{1+x^2}\,dx$$

by an expression that is arbitrarily small as $t \to 0^+$. For any $N > 0$, break up the integral over $\mathbf{R}$ into the regions $|x| \le N$ and $|x| \ge N$. We have

$$\left|\int_{-\infty}^\infty \frac{\cos(tx)-1}{1+x^2}\,dx\right| \le \int_{|x|\le N}\frac{|\cos(tx)-1|}{1+x^2}\,dx + \int_{|x|\ge N}\frac{|\cos(tx)-1|}{1+x^2}\,dx$$

$$\le \int_{|x|\le N}\frac{t|x|}{1+x^2}\,dx + \int_{|x|\ge N}\frac{2}{1+x^2}\,dx = t\log(1+N^2) + 4\left(\frac{\pi}{2} - \arctan N\right).$$

Taking $N$ sufficiently large, we can make $4(\pi/2 - \arctan N)$ as small as we wish, and after doing that we can make the first term as small as we wish by taking $t$ sufficiently small.

Returning to (11.4), letting $t \to 0^+$ we obtain $a + b = \pi$, so

$$\int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx = ae^t + (\pi - a)e^{-t} \tag{11.5}$$

for all $t > 0$. Now let $t \to \infty$ in (11.5). The integral tends to 0 by the Riemann–Lebesgue lemma from Fourier analysis, although we can explain this concretely in our special case: using integration by parts with $u = 1/(1+x^2)$ and $dv = \cos(tx)\,dx$, we get

$$\int_{-\infty}^\infty \frac{\cos(tx)}{1+x^2}\,dx = \frac{1}{t}\int_{-\infty}^\infty \frac{2x\sin(tx)}{(1+x^2)^2}\,dx.$$

The absolute value of the term on the right is bounded above by a constant divided by $t$, which tends to 0 as $t \to \infty$. Therefore $ae^t + (\pi-a)e^{-t} \to 0$ as $t \to \infty$. This forces $a = 0$, which completes the proof that $F(t) = \pi e^{-t}$ for $t > 0$.

## 12. Exercises

1. From the formula $\int_0^\infty e^{-tx}\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t$ for $t > 0$, in Section 3, use a change of variables to obtain a formula for $\int_0^\infty e^{-ax}\frac{\sin(bx)}{x}\,dx$ when $a$ and $b$ are positive. Then use differentiation under the integral sign with respect to $b$ to find a formula for $\int_0^\infty e^{-ax}\cos(bx)\,dx$ when $a$ and $b$ are positive. (Differentiation under the integral sign with respect to $a$ will produce
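Formula (11.1) can be checked numerically despite the delicacy of the derivation. This sketch is our own (not from the paper): the even integrand is integrated over $[0, 500]$ and doubled, and the slowly decaying oscillatory tail limits the achievable accuracy:

```python
import math

def simpson(f, a, b, n=100000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

t = 1.0
half = simpson(lambda x: math.cos(t * x) / (1 + x * x), 0.0, 500.0)
val = 2 * half   # the integrand is even in x
assert abs(val - math.pi * math.exp(-abs(t))) < 1e-4   # pi * e^{-1}
```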

a formula for $\int_0^\infty e^{-ax}\sin(bx)\,dx$, but that would be circular in our approach since we used that integral in our derivation of the formula for $\int_0^\infty e^{-tx}\frac{\sin x}{x}\,dx$.)

2. From the formula $\int_0^\infty e^{-tx}\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t$ for $t > 0$, the change of variables $x = ay$ with $a > 0$ implies

$$\int_0^\infty e^{-tay}\,\frac{\sin(ay)}{y}\,dy = \frac{\pi}{2} - \arctan t,$$

so the integral on the left is independent of $a$ and thus has $a$-derivative 0. Differentiation under the integral sign, with respect to $a$, implies

$$\int_0^\infty e^{-tay}(\cos(ay) - t\sin(ay))\,dy = 0.$$

Verify that this application of differentiation under the integral sign is valid when $a > 0$ and $t > 0$. What happens if $t = 0$?

3. Show $\int_0^\infty \frac{\sin(tx)}{x(x^2+1)}\,dx = \frac{\pi}{2}(1 - e^{-t})$ for $t > 0$ by justifying differentiation under the integral sign and using (11.1).

4. Prove that $\int_0^\infty e^{-tx}\,\frac{1-\cos x}{x}\,dx = \frac{1}{2}\log\left(1 + \frac{1}{t^2}\right)$ for $t > 0$. What happens to the integral as $t \to 0^+$?

5. Prove that $\int_0^\infty \frac{e^{-x} - e^{-tx}}{x}\,dx = \log t$ for $t > 0$ by justifying differentiation under the integral sign. This is (7.2) for $t > 1$. Deduce that $\int_0^\infty \frac{e^{-ax} - e^{-bx}}{x}\,dx = \log(b/a)$ for positive $a$ and $b$.

6. In calculus textbooks, formulas for the indefinite integrals $\int x^n\sin x\,dx$ and $\int x^n\cos x\,dx$ are derived recursively using integration by parts. Find formulas for these integrals when $n \le 4$ using differentiation under the integral sign starting with the formulas

$$\int \cos(tx)\,dx = \frac{\sin(tx)}{t}, \qquad \int \sin(tx)\,dx = -\frac{\cos(tx)}{t}$$

for $t > 0$.

7. If you are familiar with integration of complex-valued functions, show

$$\int_{-\infty}^\infty e^{-(x+iy)^2/2}\,dx = \sqrt{2\pi}$$

for all $y \in \mathbf{R}$. In other words, show the integral on the left side is independent of $y$. (Hint: Use differentiation under the integral sign to compute the $y$-derivative of the left side.)

## Appendix A. Justifying passage to the limit in a sine integral

In Section 3 we derived the equation

$$\int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \frac{\pi}{2} - \arctan t \quad \text{for } t > 0, \tag{A.1}$$

which by naive passage to the limit as $t \to 0^+$ suggests that

$$\int_0^\infty \frac{\sin x}{x}\,dx = \frac{\pi}{2}. \tag{A.2}$$

To prove (A.2) is correct, we will show $\int_0^\infty \frac{\sin x}{x}\,dx$ exists and then show the difference

$$\int_0^\infty \frac{\sin x}{x}\,dx - \int_0^\infty e^{-tx}\,\frac{\sin x}{x}\,dx = \int_0^\infty (1 - e^{-tx})\,\frac{\sin x}{x}\,dx \tag{A.3}$$

tends to 0 as $t \to 0^+$. The key in both cases is alternating series.

On the interval $[k\pi, (k+1)\pi]$, where $k$ is an integer, we can write $\sin x = (-1)^k|\sin x|$, so convergence of $\int_0^\infty \frac{\sin x}{x}\,dx = \lim_{b\to\infty}\int_0^b \frac{\sin x}{x}\,dx$ is equivalent to convergence of the series

$$\sum_{k\ge 0}\int_{k\pi}^{(k+1)\pi}\frac{\sin x}{x}\,dx = \sum_{k\ge 0}(-1)^k\int_{k\pi}^{(k+1)\pi}\frac{|\sin x|}{x}\,dx.$$

This is an alternating series in which the terms $a_k = \int_{k\pi}^{(k+1)\pi}\frac{|\sin x|}{x}\,dx$ are monotonically decreasing:

$$a_{k+1} = \int_{(k+1)\pi}^{(k+2)\pi}\frac{|\sin x|}{x}\,dx = \int_{k\pi}^{(k+1)\pi}\frac{|\sin(x+\pi)|}{x+\pi}\,dx = \int_{k\pi}^{(k+1)\pi}\frac{|\sin x|}{x+\pi}\,dx < a_k.$$

By a simple estimate $a_k \le \frac{1}{k}$ for $k \ge 1$, so $a_k \to 0$. Thus $\int_0^\infty \frac{\sin x}{x}\,dx = \sum_{k\ge 0}(-1)^k a_k$ converges.

To show the right side of (A.3) tends to 0 as $t \to 0^+$, we write it as an alternating series. Breaking up the interval of integration $[0,\infty)$ into a union of intervals $[k\pi, (k+1)\pi]$ for $k \ge 0$,

$$\int_0^\infty (1 - e^{-tx})\,\frac{\sin x}{x}\,dx = \sum_{k\ge 0}(-1)^k I_k(t), \quad \text{where } I_k(t) = \int_{k\pi}^{(k+1)\pi}(1 - e^{-tx})\,\frac{|\sin x|}{x}\,dx. \tag{A.4}$$

Since $1 - e^{-tx} > 0$ for $t > 0$ and $x > 0$, the series $\sum(-1)^k I_k(t)$ is alternating. The upper bound $1 - e^{-tx} \le 1$ tells us $I_k(t) \le \frac{1}{k}$ for $k \ge 1$, so $I_k(t) \to 0$ as $k \to \infty$. To show the terms $I_k(t)$ are monotonically decreasing with $k$, set this up as the inequality

$$I_k(t) - I_{k+1}(t) > 0 \quad \text{for } t > 0. \tag{A.5}$$

Each $I_k(t)$ is a function of $t$ for all $t$, not just $t > 0$ (note $I_k(t)$ only involves integration on a bounded interval). The difference $I_k(t) - I_{k+1}(t)$ vanishes when $t = 0$ (in fact both terms are then 0), and

$$I_k'(t) = \int_{k\pi}^{(k+1)\pi} e^{-tx}\,|\sin x|\,dx$$

for all $t$ by differentiation under the integral sign, so (A.5) would follow from the derivative inequality $I_k'(t) - I_{k+1}'(t) > 0$ for $t > 0$. By a change of variables $y = x - \pi$ in the integral for $I_{k+1}'(t)$,

$$I_{k+1}'(t) = \int_{k\pi}^{(k+1)\pi} e^{-t(y+\pi)}\,|\sin(y+\pi)|\,dy = e^{-t\pi}\int_{k\pi}^{(k+1)\pi} e^{-ty}\,|\sin y|\,dy < I_k'(t).$$

This completes the proof that the series in (A.4) for $t > 0$ satisfies the alternating series test.
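The alternating-series structure above can be observed directly. This sketch is our own numerical illustration (not from the paper): the terms $a_k$ are computed by quadrature, checked to be strictly decreasing, and their alternating partial sum is compared with $\pi/2$:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

# a_k = integral over [k*pi, (k+1)*pi] of |sin x| / x (value 1 at the removable point x = 0)
terms = [simpson(lambda x: abs(math.sin(x)) / x if x > 0 else 1.0,
                 k * math.pi, (k + 1) * math.pi)
         for k in range(60)]

assert all(terms[k] > terms[k + 1] > 0 for k in range(59))   # monotonically decreasing

partial = sum((-1) ** k * terms[k] for k in range(60))
# by the alternating series test the error is below the first omitted term (about 0.01)
assert abs(partial - math.pi / 2) < 0.02
```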

If we truncate the series $\sum_{k\ge 0}(-1)^k I_k(t)$ after the $N$th term, the magnitude of the error is no greater than the absolute value of the next term:

$$\sum_{k\ge 0}(-1)^k I_k(t) = \sum_{k=0}^{N}(-1)^k I_k(t) + r_N, \quad \text{where } |r_N| \le |I_{N+1}(t)| \le \frac{1}{N+1}.$$

Since $|1 - e^{-tx}| \le tx$,

$$\left|\sum_{k=0}^{N}(-1)^k I_k(t)\right| \le \int_0^{(N+1)\pi}(1 - e^{-tx})\,\frac{|\sin x|}{x}\,dx \le \int_0^{(N+1)\pi} t\,dx = t(N+1)\pi.$$

Thus

$$\left|\int_0^\infty (1 - e^{-tx})\,\frac{\sin x}{x}\,dx\right| \le t(N+1)\pi + \frac{1}{N+1}.$$

For any $\varepsilon > 0$ we can make the second term at most $\varepsilon/2$ by a suitable choice of $N$. Then the first term is at most $\varepsilon/2$ for all small enough $t$ (depending on $N$), and that shows (A.3) tends to 0 as $t \to 0^+$.

## References

[1] W. Appel, *Mathematics for Physics and Physicists*, Princeton Univ. Press, Princeton, 2007.

[2] R. P. Feynman, *Surely You're Joking, Mr. Feynman!*, Bantam, New York, 1985.

[3] S. K. Goel and A. J. Zajta, "Parametric Integration Techniques", *Math. Mag.* 62 (1989), 318–322.

[4] S. Lang, *Undergraduate Analysis*, 2nd ed., Springer-Verlag, New York, 1997.

[5] S. Lang, *Complex Analysis*, 3rd ed., Springer-Verlag, New York, 1993.

[6] E. Talvila, "Some Divergent Trigonometric Integrals", *Amer. Math. Monthly* 108 (2001), 432–436.