Practical Statistics for Particle Physicists - PowerPoint Presentation
Presentation Transcript

Slide1

Practical Statistics for Particle Physicists
Lecture 1

Harrison B. Prosper
Florida State University
European School of High-Energy Physics
Parádfürdő, Hungary, 5 – 18 June 2013

Slide2

Outline
Lecture 1: Descriptive Statistics; Probability & Likelihood
Lecture 2: The Frequentist Approach; The Bayesian Approach
Lecture 3: Analysis Example

Slide3

Descriptive Statistics

Slide4

Descriptive Statistics – 1
Definition: A statistic is any function of the data X. Given a sample X = x_1, x_2, …, x_N, it is often of interest to compute statistics such as the sample average

x̄ = (1/N) Σ x_i

and the sample variance

s² = (1/N) Σ (x_i − x̄)²

In any analysis, it is good practice to study ensemble averages, denoted by ⟨ … ⟩, of relevant statistics.
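These definitions can be sketched numerically (an illustration, not from the slides; Gaussian toy data with made-up parameters, and the 1/N convention for the sample variance):

```python
import random

def sample_average(x):
    return sum(x) / len(x)

def sample_variance(x):
    """Sample variance with the 1/N convention used in the lecture."""
    xbar = sample_average(x)
    return sum((xi - xbar) ** 2 for xi in x) / len(x)

random.seed(1)
# One "experiment": N = 100 Gaussian measurements, true mean 5, true variance 4
x = [random.gauss(5.0, 2.0) for _ in range(100)]
print(sample_average(x), sample_variance(x))

# Ensemble average <xbar> estimated from many independent simulated experiments
ensemble = [sample_average([random.gauss(5.0, 2.0) for _ in range(100)])
            for _ in range(2000)]
print(sum(ensemble) / len(ensemble))  # close to the true mean 5
```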


Slide5

Descriptive Statistics – 2
Ensemble averages of a statistic x, with true value μ:

Mean: ⟨x⟩
Error: x − μ
Bias: b = ⟨x⟩ − μ
Variance: V = ⟨(x − ⟨x⟩)²⟩
Mean square error: MSE = ⟨(x − μ)²⟩

Slide6

Descriptive Statistics – 3
The MSE is the most widely used measure of closeness of an ensemble of statistics {x} to the true value μ. It decomposes as

MSE = V + b²

The root mean square (RMS) is

RMS = √MSE

Exercise 1: Show this.
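The decomposition MSE = V + b² behind Exercise 1 can be checked numerically; the biased statistic below is hypothetical and all numbers are made up:

```python
import random

random.seed(2)
mu = 10.0
# Hypothetical biased statistic: a sample mean of 20 draws, shifted by +0.5
ensemble = [sum(random.gauss(mu, 3.0) for _ in range(20)) / 20 + 0.5
            for _ in range(20000)]

mean = sum(ensemble) / len(ensemble)
bias = mean - mu                                             # b = <x> - mu
variance = sum((x - mean) ** 2 for x in ensemble) / len(ensemble)
mse = sum((x - mu) ** 2 for x in ensemble) / len(ensemble)   # <(x - mu)^2>
print(mse, variance + bias ** 2)  # identical up to rounding
```

The agreement is exact (an algebraic identity), not merely approximate for large ensembles.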

Slide7

Descriptive Statistics – 4
Consider the ensemble average of the sample variance.

Slide8

Descriptive Statistics – 5
The ensemble average of the sample variance has a negative bias of −V/N:

⟨s²⟩ = V − V/N

Exercise 2: Show this.
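The negative bias −V/N can be demonstrated with a short Monte Carlo (an illustration with made-up parameters, not from the slides):

```python
import random

random.seed(3)
V, N = 4.0, 10  # true ensemble variance and sample size

def sample_variance(x):
    """Sample variance with the 1/N convention."""
    xbar = sum(x) / len(x)
    return sum((xi - xbar) ** 2 for xi in x) / len(x)

# Ensemble of sample variances from many simulated experiments
s2 = [sample_variance([random.gauss(0.0, V ** 0.5) for _ in range(N)])
      for _ in range(100000)]
avg_s2 = sum(s2) / len(s2)
print(avg_s2, V - V / N)  # both close to 3.6
```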

Slide9

Descriptive Statistics – Summary
The sample average is an unbiased estimate of the ensemble average.
The sample variance is a biased estimate of the ensemble variance.

Slide10

Probability

Slide11

Probability – 1
Basic Rules
1. P(A) ≥ 0
2. P(A) = 1 if A is true
3. P(A) = 0 if A is false
Sum Rule
4. P(A + B) = P(A) + P(B) if AB is false *
Product Rule
5. P(AB) = P(A|B) P(B) *

* A + B = A or B, AB = A and B, A|B = A given that B is true

Slide12

Probability – 2
By definition, the conditional probability of A given B is

P(A|B) = P(AB) / P(B)

P(A) is the probability of A without restriction; P(A|B) is the probability of A when we restrict to the conditions under which B is true.

Slide13

Probability – 3
From

P(AB) = P(A|B) P(B) = P(B|A) P(A)

we deduce Bayes' Theorem:

P(B|A) = P(A|B) P(B) / P(A)
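Bayes' theorem can be illustrated with a toy calculation (all numbers are made up and the helper function is hypothetical); P(A) is expanded with the sum and product rules:

```python
def posterior(p_a_given_b, p_b, p_a_given_notb):
    """P(B|A) via Bayes' theorem, with P(A) built from the sum and product rules."""
    p_a = p_a_given_b * p_b + p_a_given_notb * (1.0 - p_b)  # P(A)
    return p_a_given_b * p_b / p_a

# Toy numbers: rare signal P(B) = 0.01, efficient selection P(A|B) = 0.95,
# small fake rate P(A|not B) = 0.05
print(posterior(0.95, 0.01, 0.05))  # ≈ 0.161: most selected events are background
```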

Slide14

Probability – 4
A and B are mutually exclusive if P(AB) = 0.
A and B are exhaustive if P(A) + P(B) = 1.

Theorem

Exercise 3: Prove the theorem.

Slide15

Probability
Binomial & Poisson Distributions

Slide16

Binomial & Poisson Distributions – 1
A Bernoulli trial has two outcomes: S = success or F = failure.
Example: Each collision between protons at the LHC is a Bernoulli trial in which something interesting happens (S) or does not (F).

Slide17

Binomial & Poisson Distributions – 2
Let p be the probability of a success, which is assumed to be the same at each trial. Since S and F are exhaustive, the probability of a failure is 1 − p. For a given order O of n trials, the probability Pr(k, O | n) of exactly k successes and n − k failures is

Pr(k, O | n) = p^k (1 − p)^(n−k)

Slide18

Binomial & Poisson Distributions – 3
If the order O of successes and failures is irrelevant, we can eliminate the order from the problem by summing over all possible orders. This yields the binomial distribution

Binomial(k, n, p) = [n! / (k! (n − k)!)] p^k (1 − p)^(n−k)

which is sometimes written as k ~ Binomial(n, p).
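A minimal sketch of this construction, the per-order probability times the number of orders C(n, k) (values below are made up for illustration):

```python
from math import comb  # C(n, k), available in Python >= 3.8

def binomial_pmf(k, n, p):
    """Binomial(k, n, p) = C(n, k) p^k (1 - p)^(n - k)."""
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

# Probabilities over all k = 0..n sum to 1
n, p = 10, 0.3
total = sum(binomial_pmf(k, n, p) for k in range(n + 1))
print(total)                     # 1.0 up to rounding
print(binomial_pmf(3, 10, 0.3))  # ≈ 0.2668, the most probable k for p = 0.3
```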

Slide19

Binomial & Poisson Distributions – 4
We can prove that the mean number of successes a is a = p n.

Exercise 4: Prove it.

Suppose that the probability p of a success is very small. Then, in the limit p → 0 and n → ∞ such that a = p n is constant,

Binomial(k, n, p) → Poisson(k, a)

The Poisson distribution is generally regarded as a good model for a counting experiment.

Exercise 5: Show that Binomial(k, n, p) → Poisson(k, a).
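The limit in Exercise 5 can be seen numerically (a sketch; a = 3 and the tested value k = 2 are chosen arbitrarily):

```python
from math import comb, exp, factorial

def binomial_pmf(k, n, p):
    return comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def poisson_pmf(k, a):
    """Poisson(k, a) = exp(-a) a^k / k!"""
    return exp(-a) * a ** k / factorial(k)

# Hold a = p*n fixed while n grows and p shrinks
a = 3.0
for n in (10, 100, 10000):
    p = a / n
    print(n, binomial_pmf(2, n, p), poisson_pmf(2, a))
# The binomial column converges to the Poisson value ≈ 0.2240
```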

Slide20

Common Distributions and Densities

Slide21

Probability – What is it Exactly?
There are at least two interpretations of probability:
1. Degree of belief in, or plausibility of, a proposition.
Example: it will snow in Geneva on Friday.
2. Relative frequency of outcomes in an infinite sequence of identically repeated trials.
Example: trials: proton-proton collisions at the LHC; outcome: the creation of a Higgs boson.

Slide22

Likelihood

Slide23

Likelihood – 1
The likelihood function is simply the probability, or probability density function (pdf), evaluated at the observed data.

Example 1: Top quark discovery (D0, 1995)

p(D | d) = Poisson(D | d) = exp(−d) d^D / D!   is the probability to get a count D

p(17 | d) = Poisson(17 | d)   is the likelihood of the observation D = 17
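A sketch of this Poisson likelihood, viewed as a function of the expected count d for the fixed observation D = 17 (the scan points below are arbitrary):

```python
from math import exp, factorial

def poisson(D, d):
    """Poisson(D | d) = exp(-d) d^D / D!"""
    return exp(-d) * d ** D / factorial(D)

D = 17
# Scan the likelihood p(17 | d) over a few trial values of d
likelihoods = {d: poisson(D, d) for d in (5.0, 10.0, 17.0, 25.0)}
best = max(likelihoods, key=likelihoods.get)
print(best)  # 17.0: among these values, the likelihood peaks at d = D
```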

Slide24

Likelihood – 2
Example 2: Multiple counts D_i with a fixed total count N; the counts then follow a multinomial distribution. This is an example of a multi-binned likelihood.
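A multi-binned likelihood of this kind can be sketched as follows, assuming the fixed-N counts are multinomially distributed (counts and bin probabilities below are made up):

```python
from math import factorial, prod  # math.prod needs Python >= 3.8

def multinomial_likelihood(D, p):
    """Probability of bin counts D given bin probabilities p (sum(D) = N fixed)."""
    coef = factorial(sum(D))
    for Di in D:
        coef //= factorial(Di)
    return coef * prod(pi ** Di for pi, Di in zip(p, D))

D = [4, 3, 3]        # made-up bin counts, N = 10
p = [0.4, 0.3, 0.3]  # hypothesised bin probabilities
print(multinomial_likelihood(D, p))
```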

Slide25

Likelihood – 3
Example 3: Red shift and distance modulus measurements of N = 580 Type Ia supernovae. This is an example of an un-binned likelihood.

Slide26

Likelihood – 4
Example 4: Higgs → γγ
The discovery of the neutral Higgs boson in the di-photon final state made use of an un-binned likelihood,

p(x | s, m, w, b) = exp[−(s + b)] Π_i [ s f_s(x_i | m, w) + b f_b(x_i) ]

where
x = di-photon masses
m = mass of new particle
w = width of resonance
s = expected signal
b = expected background
f_s = signal model
f_b = background model

Exercise 6: Show that a binned multi-Poisson likelihood yields an un-binned likelihood of this form as the bin widths go to zero.
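An extended un-binned likelihood of this general kind can be sketched as follows. Everything here is made up for illustration: a Gaussian signal model f_s, a flat background model f_b, and a handful of fake di-photon masses; this is not the ATLAS/CMS implementation.

```python
from math import exp, log, pi, sqrt

def fs(x, m=125.0, w=2.0):
    """Hypothetical Gaussian signal model in the di-photon mass (GeV)."""
    return exp(-0.5 * ((x - m) / w) ** 2) / (w * sqrt(2.0 * pi))

def fb(x, lo=100.0, hi=160.0):
    """Hypothetical flat background model on [lo, hi]."""
    return 1.0 / (hi - lo)

def log_likelihood(s, b, masses):
    # ln L = -(s + b) + sum_i ln[ s fs(x_i) + b fb(x_i) ]
    return -(s + b) + sum(log(s * fs(x) + b * fb(x)) for x in masses)

masses = [110.2, 124.8, 125.3, 133.9, 125.1]  # fake di-photon masses (GeV)
# The three masses near 125 GeV make s > 0 more likely than s = 0
print(log_likelihood(3.0, 10.0, masses) > log_likelihood(0.0, 10.0, masses))  # True
```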

Slide27

Likelihood – 5
Given the likelihood function we can answer questions such as:
How do I estimate a parameter?
How do I quantify its accuracy?
How do I test an hypothesis?
How do I quantify the significance of a result?

Writing down the likelihood function requires:
Identifying all that is known, e.g., the observations
Identifying all that is unknown, e.g., the parameters
Constructing a probability model for both

Slide28

Likelihood – 6
Example: Top Quark Discovery (1995), D0 Results
knowns:
D = 17 events
B = 3.8 ± 0.6 background events
unknowns:
b = expected background count
s = expected signal count
d = b + s = expected event count

Note: we are uncertain about the unknowns, so 17 ± 4.1 is a statement about d, not about the observed count 17!

Slide29

Likelihood – 7
Probability:

p(D, Q | s, b) = Poisson(D | s + b) Poisson(Q | k b)

Likelihood:

p(17, Q | s, b) = Poisson(17 | s + b) Poisson(Q | k b)

where B = Q / k and δB = √Q / k
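One standard way to encode B = Q / k and δB = √Q / k is a second Poisson count Q with expected value k b; the sketch below is my reconstruction for illustration, not necessarily the slide's exact model:

```python
from math import log, lgamma

B, dB = 3.8, 0.6
k = B / dB ** 2   # scale factor, ≈ 10.6, from B = Q/k and dB = sqrt(Q)/k
Q = k * B         # effective count in the background measurement, ≈ 40.1

def log_poisson(n, mu):
    """log Poisson(n | mu); lgamma permits the non-integer count Q."""
    return -mu + n * log(mu) - lgamma(n + 1.0)

def log_likelihood(s, b, D=17):
    return log_poisson(D, s + b) + log_poisson(Q, k * b)

# With b near B, the likelihood prefers s near D - B = 13.2
print(log_likelihood(13.2, 3.8) > log_likelihood(0.0, 3.8))  # True
```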

Slide30

Summary
Statistic: a statistic is any function of potential observations.
Probability: probability is an abstraction that must be interpreted.
Likelihood: the likelihood is the probability (or probability density) of potential observations, evaluated at the observed data.

Slide31

Tutorials
Location: http://www.hep.fsu.edu/~harry/ESHEP13
Download tutorials.tar.gz and unpack it:

tar zxvf tutorials.tar.gz

Needed: a recent version of ROOT linked with RooFit and TMVA