
Random Harmonic Series

Byron Schmuland

1. Introduction. The harmonic series is the first nontrivial divergent series we encounter. We learn that, although the individual terms $1/j$ converge to zero, together they accumulate so that their sum is infinite:
$$1 + \frac12 + \frac13 + \frac14 + \cdots = \infty.$$
In contrast, we also learn that the alternating harmonic series converges; in fact,
$$1 - \frac12 + \frac13 - \frac14 + \cdots = \sum_{j=1}^\infty \frac{(-1)^{j+1}}{j} = \ln 2.$$
Here the positive and negative terms partly cancel, allowing the series to converge. To a probabilist, this alternating series suggests choosing plus and minus signs at random, by tossing a fair coin. Formally, let

$(s_j)_{j=1}^\infty$ be independent random variables with common distribution $P(s_j = 1) = P(s_j = -1) = 1/2$. Then Kolmogorov's three series theorem [1, Theorem 22.8] or the martingale convergence theorem [1, Theorem 35.4] shows that the sequence $\sum_{j=1}^n s_j/j$ converges almost surely. In this note, we investigate the distribution of the sum $X := \sum_{j=1}^\infty s_j/j$.

2. Distribution of X. Obviously, the distribution of $X$ is symmetric about 0, so the mean is zero. The second moment calculation
$$E(X^2) = \sum_{j=1}^\infty \sum_{k=1}^\infty \frac{E(s_j s_k)}{jk} = \sum_{j=1}^\infty \frac{1}{j^2} = \frac{\pi^2}{6},$$


in tandem with the Cauchy–Schwarz inequality, shows that the average absolute value $E|X|$ is no bigger than $\pi/\sqrt{6} = 1.28255$. Exponential

moments provide even more information. Simple properties of the exponential function give, for all $t > 0$,
$$E(\exp(tX)) = \prod_{j=1}^\infty E(\exp(t s_j/j)) = \prod_{j=1}^\infty \frac{\exp(t/j) + \exp(-t/j)}{2} \le \prod_{j=1}^\infty \exp\!\left(\frac{t^2}{2j^2}\right) = \exp\!\left(\frac{t^2 \pi^2}{12}\right).$$
For $x > 0$, Markov's inequality [1, (21.11)] tells us that
$$P(X > x) \le \inf_{t > 0} \exp\!\left(\frac{t^2 \pi^2}{12} - tx\right) = \exp\!\left(-\frac{3x^2}{\pi^2}\right),$$
which shows that the probability of a very large sum is exceedingly small. On the other hand, we can show that it is never zero. Since $\sum_{j=1}^\infty s_j/j$ converges almost surely, given any $\delta > 0$ we can choose $N$ so that
$$P\!\left(\Big|\sum_{j > M} s_j/j\Big| \le \delta/2\right) > 0 \tag{1}$$
whenever $M \ge N$. Also, given any $x$ in $\mathbb{R}$ we can select a nonrandom sequence $(e_j)_{j=1}^\infty$ of plus ones and minus ones so

that $\sum_{j=1}^\infty e_j/j = x$. This is done by choosing plus signs until the partial sum exceeds $x$ for the first time, then minus signs until the partial sum first becomes smaller than $x$, then iterating this procedure. Let $N'$ be so big that $|\sum_{j=1}^n e_j/j - x| \le \delta/2$ for all $n \ge N'$. Putting $M = \max(N, N')$, we have, in view of (1) and the independence of the $s_j$,
$$P(|X - x| \le \delta) \ge P(s_1 = e_1, \ldots, s_M = e_M)\, P\!\left(\Big|\sum_{j > M} s_j/j\Big| \le \delta/2\right) = (1/2)^M\, P\!\left(\Big|\sum_{j > M} s_j/j\Big| \le \delta/2\right) > 0.$$
This shows that the distribution of $X$ has full support on the real line, so there is no theoretical upper (or lower) bound on the random sum. In [3], Kent E. Morrison also considers the distribution of the

random variable $X$. His numerical integration suggests that $X$ has a density of the form shown in Figure 1.
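Morrison's numerical picture is easy to reproduce by simulation. The sketch below is my own illustration, not part of the paper; the truncation level and sample count are arbitrary choices. It samples truncated copies of $X$ and checks the two moments computed above.

```python
import math
import random

# Monte Carlo sketch (not from the paper): sample truncated copies of
# X = sum_j s_j / j and check the moments computed in section 2.
random.seed(1)
N_TERMS = 1000    # truncation level; the discarded tail has variance sum_{j>1000} 1/j^2 ~ 0.001
N_SAMPLES = 3000

def sample_x() -> float:
    """One draw of the truncated random harmonic sum."""
    return sum(random.choice((-1.0, 1.0)) / j for j in range(1, N_TERMS + 1))

xs = [sample_x() for _ in range(N_SAMPLES)]
mean = sum(xs) / N_SAMPLES
second_moment = sum(x * x for x in xs) / N_SAMPLES

print(mean)             # near 0, by symmetry
print(second_moment)    # near pi^2/6 = 1.6449...
```

A histogram of `xs` reproduces the flat-topped shape of Figure 1.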


[Figure 1. Density of $X = \sum_{j=1}^\infty s_j/j$.]

Looking at Figure 1, it is easy to believe that $X$ has a smooth density with a flat top. Morrison [3, p. 723] notes that the value of the density at 0 is "suspiciously close to 1/4," and he also conjectures that its value at 2 is 1/8. Unfortunately, in trying to justify such claims, the approach to $X$ via the partial sums $\sum_{j=1}^n s_j/j$ does not offer much of a foothold.

These partial sums are discrete random variables and do not have densities. After a brief interlude on coin tossing, in section 4 we take an alternative approach to $X$ and in section 5 settle Morrison's two conjectures. This was first done by Morrison himself in an unpublished paper [4] in 1998. In section 6, we explain his proof as well.

3. Binary digits and coin tossing. An infinite sequence of fair coin tosses can be modelled by selecting a random number uniformly from the unit interval. This observation underlies much of Mark Kac's delightful monograph [2] but can also be found

in many probability texts in connection with Borel's normal number theorem [1, sec. 1]. This model is based on the nonterminating dyadic expansion
$$\omega = \sum_{n=1}^\infty \frac{d_n(\omega)}{2^n}$$


of $\omega$ in $[0,1]$; that is, $(d_n(\omega))_{n=1}^\infty$ is the sequence of binary digits of $\omega$. To avoid ambiguity we use the nonterminating expansion for $\omega > 0$. For instance, $1/2 = .0111\ldots$ rather than $1/2 = .1000\ldots$

If we equip $[0,1]$ with Lebesgue measure, the point $\omega$ is said to be chosen uniformly from $[0,1]$, in the sense that
$$P(a \le \omega \le b) = b - a \tag{2}$$
for $0 \le a \le b \le 1$. Equation (2) shows that there is no location bias in selecting $\omega$; informally, every $\omega$ in $[0,1]$ is equally likely to be chosen.
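The dyadic-expansion model can be sketched in a few lines of code (the function names here are mine, chosen for illustration): the digits reconstruct $\omega$, and for a uniformly chosen $\omega$ each digit comes up 1 about half the time.

```python
import random

# Extract the binary digits d_n of omega in [0, 1) and check the model.
def binary_digits(omega: float, n: int) -> list[int]:
    """First n binary digits of omega."""
    digits = []
    for _ in range(n):
        omega *= 2
        d = int(omega)      # 0 or 1
        digits.append(d)
        omega -= d
    return digits

# The digits reconstruct omega: omega = sum_n d_n / 2^n.
omega = 0.6875              # = .1011 in binary, exactly representable
d = binary_digits(omega, 10)
approx = sum(dn / 2 ** (n + 1) for n, dn in enumerate(d))
print(d[:4])                # [1, 0, 1, 1]
print(approx)               # 0.6875

# Empirically, the first digit of a uniform omega is 1 about half the time.
random.seed(7)
freq = sum(binary_digits(random.random(), 1)[0] for _ in range(10_000)) / 10_000
print(freq)                 # close to 0.5
```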

It follows [1, (1.9)] that the random variables $(d_n)_{n=1}^\infty$ are independent and have common distribution $P(d_n = 0) = P(d_n = 1) = 1/2$. To recap, the binary digits of a uniformly chosen number from the unit interval act like a sequence of fair coin tosses. The transformation $\omega \mapsto 2\omega - 1$ preserves uniformity but changes the underlying interval to $[-1,1]$ and changes the coefficients from zeros and ones to plus ones and minus ones. Thus the classical model of fair coin tosses by a uniform random number implies the following proposition.

Proposition 1. If $(s_n)_{n=1}^\infty$ are independent random variables with common distribution $P(s_n = 1) = P(s_n = -1) = 1/2$, then

the sum $\sum_{n=1}^\infty s_n/2^n$ has a uniform distribution on $[-1,1]$.

4. Regrouping the series. Proposition 1 of the previous section shows that a sequence of discrete random variables can sum to a continuous random variable with a well-known density. We exploit this result by rewriting our sum $\sum_{j=1}^\infty s_j/j$ as follows:
$$X = \underbrace{\frac{s_1}{1} + \frac{s_2}{2} + \frac{s_4}{4} + \frac{s_8}{8} + \cdots}_{=:\,U_0} + \underbrace{\frac{s_3}{3} + \frac{s_6}{6} + \frac{s_{12}}{12} + \frac{s_{24}}{24} + \cdots}_{=:\,U_1} + \underbrace{\frac{s_5}{5} + \frac{s_{10}}{10} + \frac{s_{20}}{20} + \frac{s_{40}}{40} + \cdots}_{=:\,U_2} + \cdots$$


For every $k \ge 0$, Proposition 1 implies that
$$U_k := \sum_{n=1}^\infty \frac{s_{(2k+1)2^{n-1}}}{(2k+1)2^{n-1}} = \frac{2}{2k+1} \sum_{n=1}^\infty \frac{s_{(2k+1)2^{n-1}}}{2^n}$$
has a uniform distribution on $[-2/(2k+1),\ 2/(2k+1)]$. Since the $U_k$ are defined using distinct $s_j$ variables, they are independent as well. That is, we can write $X$ as the sum of independent uniform random variables. This

new series warrants a closer look, since regrouping a conditionally convergent series can give a different sum. For example, the alternating harmonic series has value
$$\sum_{j=1}^\infty \frac{(-1)^{j+1}}{j} = \ln 2,$$
but regrouping the series as above we get, for every $k \ge 0$,
$$U_k = \frac{1}{2k+1}\left(1 - \frac12 - \frac14 - \frac18 - \cdots\right) = \frac{1}{2k+1}(1 - 1) = 0.$$
For the alternating sequence of plus and minus signs, then, the regrouped series sums to 0, not $\ln 2$. Luckily, this turns out to be a rare exception. It is not hard to see that you can group a finite number of the $s_j/j$ without changing the sum $X$; for instance,
$$X = \left(\frac{s_1}{1} + \frac{s_2}{2}\right) + \frac{s_3}{3} + \frac{s_4}{4} + \cdots \quad\text{and}\quad X = \left(\frac{s_1}{1} + \frac{s_2}{2} + \frac{s_4}{4}\right) + \frac{s_3}{3} + \frac{s_5}{5} + \frac{s_6}{6} + \cdots$$
are both legitimate equations. That this grouping leaves the sum intact has nothing to do with randomness;

it works for any sequence of plus and minus signs. So for any $n \ge 0$ we can write
$$X = (U_0 + U_1 + \cdots + U_n) + \sum_{j \in J_n} \frac{s_j}{j},$$
where $J_n$ is the collection of indices not used in $U_0, \ldots, U_n$. The mean square difference satisfies
$$E\!\left[\Big(X - \sum_{k=0}^n U_k\Big)^2\right] = \sum_{j \in J_n} \frac{1}{j^2} \le \sum_{j \ge 2n+3} \frac{1}{j^2},$$


so that $X - \sum_{k=0}^n U_k \to 0$ in mean square. On the other hand, the martingale convergence theorem shows that $\sum_{k=0}^n U_k$ converges almost surely. Both mean square convergence and almost sure convergence imply convergence in probability, where limits are almost surely unique. That is, $X = \sum_{k=0}^\infty U_k$ almost surely.

5. Densities and characteristic functions. The smoothness of the density of a random variable $Y$ is related to the decay at

infinity of its characteristic function $\varphi_Y$, defined by $\varphi_Y(t) = E(\exp(itY))$. For instance, if $\varphi_Y$ is absolutely integrable over the line, then $Y$ has a continuous density function given by the inversion formula [1, (26.20)]:
$$f_Y(x) = \frac{1}{2\pi} \int_{-\infty}^\infty \exp(-itx)\, \varphi_Y(t)\, dt. \tag{3}$$
In addition, if $\int |t|^k |\varphi_Y(t)|\, dt < \infty$, then the density is $k$ times continuously differentiable. For each $k \ge 0$, let $f_k$ and $\varphi_k$ denote the density and the characteristic function of $U_k$, respectively:
$$f_k(x) = \begin{cases} (2k+1)/4 & \text{if } -2/(2k+1) \le x \le 2/(2k+1), \\ 0 & \text{otherwise;} \end{cases} \qquad \varphi_k(t) = \frac{\sin(2t/(2k+1))}{2t/(2k+1)}.$$
The density of the partial sum $\sum_{k=0}^n U_k$ is the convolution product $f_0 * f_1 * \cdots * f_n$, while the characteristic function of the partial sum is the

product $\varphi_0 \varphi_1 \cdots \varphi_n$. Since $\sum_{k=0}^n U_k \to X$, the characteristic functions converge [1, Theorem 26.3] to the characteristic function of $X$, namely,
$$\varphi_X(t) = \prod_{k=0}^\infty \frac{\sin(2t/(2k+1))}{2t/(2k+1)}.$$


The powers of $t$ in the denominator show that $\varphi_X$ has very strong decay at infinity. Bounding $|\varphi_X|$ using the first $k+2$ factors, and using $|\sin u| \le 1$, gives
$$|\varphi_X(t)| \le \prod_{j=0}^{k+1} \frac{2j+1}{2|t|}.$$
This shows that $t \mapsto t^k \varphi_X(t)$ is integrable over $(-\infty, \infty)$, so $f_X$ has $k$ continuous derivatives. This is true for all $k$, ensuring that $X$ has a smooth density function $f_X$. The inversion formula (3) gives
$$|f_n(x) - f_X(x)| \le \frac{1}{2\pi} \int_{-\infty}^\infty \Big|\prod_{k=0}^n \varphi_k(t) - \varphi_X(t)\Big|\, dt,$$
where $f_n$ here denotes the density of the partial sum $\sum_{k=0}^n U_k$. Since $\prod_{k=0}^n \varphi_k(t) \to \varphi_X(t)$ for each $t$, $|\prod_{k=0}^n \varphi_k| \le |\varphi_0 \varphi_1 \varphi_2|$ for $n \ge 2$, and $|\varphi_0 \varphi_1 \varphi_2|$ is integrable, the dominated convergence theorem shows that $f_n$ converges

to $f_X$ uniformly on $\mathbb{R}$. The densities can be calculated explicitly using the convolution scheme
$$g_0 = f_0 = \tfrac14 \mathbf{1}_{[-2,2]}, \qquad g_n(x) = (f_n * g_{n-1})(x) = \int_{-\infty}^\infty f_n(y)\, g_{n-1}(x-y)\, dy \tag{4}$$
for $n \ge 1$, so we now have the tools to study the limit density $f_X$. Note that the property of being symmetric about 0 and nonincreasing on $[0, \infty)$ is closed under convolution. Therefore the functions $g_n$ all share this property (see Figures 2–4), as do the functions $h_n$ used in proving Proposition 3. This observation lies at the heart of both our proofs.
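Scheme (4) is easy to carry out numerically. The sketch below uses my own grid discretization (step size and bounds are arbitrary choices, not from the paper) to tabulate the $g_n$ and evaluate them at 0, where the values stay very close to 1/4.

```python
# Numerical sketch of the convolution scheme (4): tabulate the densities
# g_n = f_0 * f_1 * ... * f_n on a regular grid.
H = 0.005                     # grid step (my choice)
M = int(4.5 / H)              # grid covers [-4.5, 4.5], enough for g_7's support
ZERO = M                      # index of the point x = 0

def kernel(k: int) -> list[float]:
    """Discretized uniform density f_k, renormalized so its Riemann sum is 1."""
    pts = int((2.0 / (2 * k + 1)) / H)
    w = [1.0] * (2 * pts + 1)
    s = sum(w) * H
    return [v / s for v in w]

def convolve(g: list[float], kern: list[float]) -> list[float]:
    """(f_k * g) on the grid: H * sum_j kern[j] * g[i - j + half]."""
    half = (len(kern) - 1) // 2
    n = len(g)
    out = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j, kj in enumerate(kern):
            idx = i - j + half
            if 0 <= idx < n:
                acc += kj * g[idx]
        out[i] = acc * H
    return out

# g_0 = f_0 = 1/4 on [-2, 2], renormalized on the grid.
g = [0.25 if abs((i - ZERO) * H) <= 2.0 else 0.0 for i in range(2 * M + 1)]
s = sum(g) * H
g = [v / s for v in g]

center = [g[ZERO]]
for k in range(1, 8):
    g = convolve(g, kernel(k))
    center.append(g[ZERO])

print([round(v, 4) for v in center])   # every g_n(0) is very close to 1/4
```

The grid is far too coarse to see the tiny deficit $1/4 - g_7(0)$ proved in Proposition 2 below, but it does exhibit the symmetry and monotonicity properties used in the proofs.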


[Figures 2–4. Densities $g_n$ of successive partial sums $\sum_{k=0}^n U_k$.]


Proposition 2. The value $f_X(0)$ is strictly less than 1/4.

Proof. Each of the densities $g_n$ takes its maximum value at $x = 0$. Using (4) we calculate
$$g_n(0) = \int f_n(y)\, g_{n-1}(-y)\, dy \le g_{n-1}(0) \int f_n(y)\, dy = g_{n-1}(0),$$
so that $g_n(0) \le g_{n-1}(0)$. We note also that $g_n(x) \le 1/4$ for all $x$ in $\mathbb{R}$ and $n \ge 0$. Therefore, for $n \ge 1$, $g_n(0)$ can equal 1/4 only if $g_{n-1}(y) = 1/4$ for all $y$ in the support of the density $f_n$. We will demonstrate that this happens only when $n \le 6$.

We show by induction:
$$g_n(x) = \tfrac14 \quad\text{if and only if}\quad |x| \le 2 - \sum_{k=1}^n \frac{2}{2k+1}. \tag{5}$$
Direct inspection shows that this is true for $n = 0$, when, as usual, an "empty sum" means 0. Suppose that (5) is true for $n$. Convolution gives
$$g_{n+1}(x) = \frac{2n+3}{4} \int_{x - 2/(2n+3)}^{x + 2/(2n+3)} g_n(y)\, dy.$$
If $|x| > 2 - \sum_{k=1}^{n+1} 2/(2k+1)$, then the set $[x - 2/(2n+3),\ x + 2/(2n+3)] \cap \{y : g_n(y) < 1/4\}$ is nonempty and $g_{n+1}(x) < 1/4$; otherwise this set is empty and $g_{n+1}(x) = 1/4$. This completes the induction proof. Since
$$2 - \sum_{k=1}^7 \frac{2}{2k+1} < 0 < 2 - \sum_{k=1}^6 \frac{2}{2k+1},$$
we see that $g_7(0) < 1/4 = g_6(0)$, and conclude that $f_X(0) \le g_7(0) < 1/4$.

The next result is proved with similar ideas, but depends on the symmetry (or lack thereof) of $g_n$ in a neighborhood of 2. For example, $g_0(2)$ is

exactly 1/8, and since $x \mapsto g_0(2+x) - g_0(2)$ is an odd function for $x$ near 0, the convolution of $g_0$ with a symmetric uniform distribution over a small interval will not change its value at 2. In this way we see that $g_1(2)$ is also


equal to 1/8. Eventually, though, the neighborhood of "oddness" is smaller than the support of the next convolution, and from that point on, $g_n(2)$ begins to decrease strictly.

Proposition 3. The value $f_X(2)$ is strictly less than 1/8.

Proof. For $n \ge 0$ define $h_n$ by $h_n(x) = g_n(2+x) + g_n(2-x)$. Then $h_0 = \tfrac14 \mathbf{1}_{[-4,4]}$ and $h_n = f_n * h_{n-1}$ for $n \ge 1$. The functions $h_n$ are symmetric and nonincreasing on $[0, \infty)$, so, as in the

proof of Proposition 2, we find that $h_n$ takes its maximum value at $x = 0$ and that $h_n(0) = 2 g_n(2) \le h_{n-1}(0)$. As in the proof of Proposition 2, induction shows that $h_n(x) = 1/4$ if and only if $|x| \le 4 - \sum_{k=1}^n 2/(2k+1)$. Since
$$\sum_{k=1}^{56} \frac{2}{2k+1} > 4 > \sum_{k=1}^{55} \frac{2}{2k+1},$$
we see that $2 g_{56}(2) = h_{56}(0) < 1/4 = h_{55}(0) = 2 g_{55}(2)$, and conclude that $f_X(2) \le g_{56}(2) < 1/8$.

6. Morrison's proof. For comparison, let's look at Morrison's proof of Propositions 2 and 3. These are found on pages 13 and 14 of [4]. In effect, Morrison decomposes $X$ into $U_0$ plus a remainder:
$$R := \sum_{k=1}^\infty U_k.$$
Then $f_X = f_0 * f_R$, where $f_R$ is the density of $R$, so
$$f_X(x) = \int f_0(y)\, f_R(x - y)\, dy = \tfrac14\, P(x - 2 \le R \le x + 2). \tag{6}$$
The argument in section 2 shows

that $R$ has support on the whole real line, so that $P(-2 \le R \le 2) < 1$ and hence $f_X(0) < 1/4$. Similarly,
$$f_X(2) = \tfrac14\, P(0 \le R \le 4) = \tfrac18\, P(-4 \le R \le 4) < \tfrac18.$$
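The two cutoffs that drive Propositions 2 and 3 are easy to check numerically (a sketch; the variable names are mine): the cumulative half-widths $\sum_{k=1}^n 2/(2k+1)$ first exceed 2 at $n = 7$ and first exceed 4 at $n = 56$.

```python
# The proofs of Propositions 2 and 3 turn on when the cumulative half-widths
# sum_{k=1}^n 2/(2k+1) first exceed 2 (flat top at 0) and 4 (value at x = 2).
def cumulative(n: int) -> float:
    return sum(2.0 / (2 * k + 1) for k in range(1, n + 1))

first_over_2 = next(n for n in range(1, 200) if cumulative(n) > 2.0)
first_over_4 = next(n for n in range(1, 200) if cumulative(n) > 4.0)

print(first_over_2)   # 7, so g_7(0) < 1/4
print(first_over_4)   # 56, so g_56(2) < 1/8
```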


[Figure 5. Density of $R = \sum_{k=1}^\infty U_k$.]

Although $R$ has full support on the real line, the density in Figure 5 shows that both $P(-2 \le R \le 2)$ and $P(-4 \le R \le 4)$ are nearly equal to 1. This explains why $f_X(0)$ is so close to 1/4 and

$f_X(2)$ so close to 1/8.

7. Numerical results. In approximating $X$ by the partial sum $\sum_{k=0}^n U_k$ we replace the tail $\sum_{k=n+1}^\infty U_k$ with zero. Of course, the tail is not exactly zero; in fact, a glance at Figure 5 hints that, for $n = 0$, the tail is close to a normal random variable. Indeed, it is easy to pursue this hint and rigorously prove a central limit theorem: $\sigma_n^{-1} \sum_{k=n+1}^\infty U_k \Rightarrow Z$, where
$$\sigma_n^2 = \mathrm{Var}\Big(\sum_{k=n+1}^\infty U_k\Big) = \frac43 \sum_{k=n+1}^\infty \frac{1}{(2k+1)^2}$$
and $Z$ is a standard normal random variable. That is, the tail is close to a normal random variable with variance $\sigma_n^2$. This suggests using the approximation $\sum_{k=0}^n U_k + \sigma_n Z$, which practice shows to be superior to $\sum_{k=0}^n U_k$. For instance, even with $n = 0$,

this gives a density function already impressively close to the limit. To ten decimal places, this density has value .2499150393 at $x = 0$, and .1250000000 at $x = 2$ (see Figure 6).
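For $n = 0$ this approximation is available in closed form: $U_0$ is uniform on $[-2, 2]$ with density 1/4, so the density of $U_0 + \sigma_0 Z$ is a difference of normal distribution functions. A sketch (the function names are mine):

```python
import math

# Closed form for the n = 0 approximation U_0 + sigma_0 Z (a sketch, not code
# from the paper): f(x) = (1/4) P(x - 2 <= sigma_0 Z <= x + 2).
sigma0 = math.sqrt((4.0 / 3.0) * (math.pi ** 2 / 8.0 - 1.0))  # tail standard deviation

def Phi(z: float) -> float:
    """Standard normal distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def density(x: float) -> float:
    return 0.25 * (Phi((x + 2.0) / sigma0) - Phi((x - 2.0) / sigma0))

print(density(0.0))   # 0.2499150..., just below 1/4
print(density(2.0))   # 0.1250000000 to ten decimal places
```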


[Figure 6. Density of $U_0 + \sigma_0 Z$.]

In terms of the characteristic functions, using normal tails means approximating $\varphi_X(t)$ by $\big(\prod_{k=0}^n \varphi_k(t)\big) \exp(-\sigma_n^2 t^2/2)$ rather than $\prod_{k=0}^n \varphi_k(t)$. For the symmetric random variable $X$, the inversion formula (3) gives
$$f_X(x) = \frac{1}{\pi} \int_0^\infty \cos(xt)\, \varphi_X(t)\, dt \approx \frac{1}{\pi} \int_0^\infty \cos(xt) \Big(\prod_{k=0}^n \varphi_k(t)\Big) \exp(-\sigma_n^2 t^2/2)\, dt.$$
With $n = 150$, we integrated from $t = 0$ to $t = 15$ using a Riemann sum with $\Delta t = 0.05$ and the

midpoints of the subintervals for the points of evaluation. We determined that this is accurate to ten decimal places; it gives $f_X(0) \approx .24999439$, just below 1/4, and $f_X(2) = .1250000000$.

8. Other random sums. Replacing the tail of a series by an appropriate normal random variable is a good way of investigating other random sums. For example, the random sum $\sum_{j=1}^\infty s_j/j^2$ has the smooth density pictured in Figure 7. The lumps in the distribution come from the first three choices of random sign, while the remaining part of the random sum is essentially determined by a normal tail random variable.
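The characteristic-function computation described in section 7 is easy to reproduce at lower precision. In this sketch my truncation level and step size are more modest than the paper's, so fewer digits come out correct.

```python
import math

# Numerical inversion sketch: approximate phi_X(t) by the first N+1 factors
# times the normal-tail correction exp(-sigma_N^2 t^2 / 2), then invert with a
# midpoint Riemann sum. N, DT, and T_MAX are my own (modest) choices.
N = 20
DT = 0.05
T_MAX = 15.0

sigma_sq = (4.0 / 3.0) * (math.pi ** 2 / 8.0 - sum((2 * k + 1) ** -2 for k in range(N + 1)))

def sinc(u: float) -> float:
    return math.sin(u) / u if u != 0.0 else 1.0

def phi_approx(t: float) -> float:
    prod = 1.0
    for k in range(N + 1):
        prod *= sinc(2.0 * t / (2 * k + 1))
    return prod * math.exp(-sigma_sq * t * t / 2.0)

def density_inv(x: float) -> float:
    """f_X(x) = (1/pi) * integral_0^infinity cos(xt) phi_X(t) dt, midpoint rule."""
    total = 0.0
    t = DT / 2.0
    while t < T_MAX:
        total += math.cos(x * t) * phi_approx(t)
        t += DT
    return total * DT / math.pi

print(density_inv(0.0))   # just below 1/4
print(density_inv(2.0))   # just below 1/8
```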


[Figure 7. Density of $\sum_{j=1}^\infty s_j/j^2$.]

We conclude with some exercises and further food for thought.

1. Find a nonrandom sequence $(e_j)_{j=1}^\infty$ of plus ones and minus ones with $\lim_{n \to \infty} \frac1n \sum_{j=1}^n e_j = 0$, but such that $\sum_{j=1}^\infty e_j/j$ diverges. Balancing the plus and minus signs does not guarantee convergence.

2. Prove that $\sum_{j=1}^\infty s_j/j^2$ has a smooth density. The regrouping trick doesn't work here.

3. Use

Morrison's formula (6) to show that $f_X''(0) = \tfrac12 f_R'(2)$. Argue that $f_R$ is strictly decreasing on $(0, \infty)$, and that therefore $f_X''(0) < 0$. The density $f_X$ does not have a flat top.

4. Investigate the distribution of other random sums $\sum_{j=1}^\infty s_j a_j$.

References

[1] P. Billingsley, Probability and Measure, 2nd ed., John Wiley & Sons, New York, 1986.

[2] M. Kac, Statistical Independence in Probability, Analysis and Number Theory, Carus Mathematical Monographs, no. 12, Mathematical Association of America, Washington, D.C., 1959.


[3] K. E. Morrison, Cosine products, Fourier transforms, and random sums, Amer. Math.

Monthly 102 (1995) 716–724.

[4] K. E. Morrison, The final resting place of a fatigued random walker, unpublished manuscript, 1998.

BYRON SCHMULAND teaches probability and statistics at the University of Alberta. His research is mainly about Markov processes, but he likes working on all kinds of mathematical problems. In his leisure, he struggles through language classes in Korean and Mandarin Chinese, and enjoys listening to contemporary surf music.

Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta, Canada T6G 2G1
schmu@stat.ualberta.ca
