Bayes Theorem Prior Probabilities


Uploaded by hailey on 2023-07-08




Presentation Transcript

1. Bayes Theorem

2. Prior Probabilities
On the way to a party, you ask, “Has Karl already had too many beers?”
Your prior probabilities are 20% yes, 80% no.

3. Prior Odds, Omega
The prior odds, Ω, are the ratio of the two prior probabilities: Ω = .20/.80 = .25.
What new data would make you revise the priors?

4. Likelihood Ratio, LR
If I have had too many beers, there is a 30% likelihood that I will act awfully.
If I have not had too many beers, the likelihood is only 3%.
The likelihood ratio is LR = .30/.03 = 10.

5. Multiplication Rule of Probability
P(A and B) = P(B)·P(A|B) = P(A)·P(B|A).
Thus P(B|A) = P(A|B)·P(B) / P(A).

6. Addition Rule of Probability
Since B and Not B are mutually exclusive, P(A) = P(A and B) + P(A and Not B).
Substituting this for the denominator of our previous expression,
P(B|A) = P(A|B)·P(B) / [P(A and B) + P(A and Not B)].

7. Multiplication Rule Again
P(A and B) = P(A|B)·P(B), and P(A and Not B) = P(A|Not B)·P(Not B).
Now substitute the right-hand expressions into our previous expression, which was
P(B|A) = P(A|B)·P(B) / [P(A and B) + P(A and Not B)].

8. Bayes Theorem
Yielding
P(B|A) = P(A|B)·P(B) / [P(A|B)·P(B) + P(A|Not B)·P(Not B)].

9. Revising the Prior Probability
You arrive at the party. Karl is behaving awfully.
You revise your prior probability that Karl has had too many beers, obtaining a posterior probability.

10. A = Behaving awfully, B = Had too many beers
Prior probabilities: P(B) = .20, P(Not B) = .80.
Likelihoods: P(A|B) = .30, P(A|Not B) = .03.

11. Posterior Odds
Given that Karl is behaving awfully,
P(B|A) = (.30)(.20) / [(.30)(.20) + (.03)(.80)] = .06/.084 = .714.
The probability that he has had too many beers is revised to .714.
And the odds are revised from .25 to .714/.286 = 2.5.

12. Bayes Theorem Restated
The posterior odds = the product of the prior odds and the likelihood ratio:
2.5 = .25 × 10.
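As a check on the arithmetic above, here is a minimal Python sketch of the beer example; the variable names are my own, not from the presentation:

```python
# Beer example: posterior odds = prior odds x likelihood ratio
p_b, p_not_b = 0.20, 0.80                   # priors: too many beers / not
p_a_given_b, p_a_given_not_b = 0.30, 0.03   # likelihoods of acting awfully

prior_odds = p_b / p_not_b                  # 0.25
lr = p_a_given_b / p_a_given_not_b          # 10.0
posterior_odds = prior_odds * lr            # 2.5
# Convert odds back to a probability: odds / (1 + odds)
posterior_prob = posterior_odds / (1 + posterior_odds)  # ~0.714
```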

13. Bayesian Hypothesis TestingH: IQ = 100.H1: IQ = 110.P(H) and P(H1) are prior probabilities. I’ll set both equal to .5.D is the obtained data.P(D| H) and P(D|H1) are the likelihoods.P(D| H) is a bit like the p value from classical hypothesis testing.

14. Compute Test Statistics
D: Sample of 25 scores, M = 107.
Assume σ = 15, so σM = σ/√N = 15/5 = 3.
Compute z = (M − μ)/σM for each hypothesis:
For H0, z = (107 − 100)/3 = 2.33.
For H1, z = (107 − 110)/3 = −1.
Assume that z is normally distributed.

15. Obtain the Likelihoods and P(D)
For each hypothesis, the likelihood is weighted by .5, since we consider the null and the alternative equally likely:
P(D|H0)·P(H0) = .5(.0264) = .0132
P(D|H1)·P(H1) = .5(.2420) = .1210
Notice that P(D) = .0132 + .1210 = .1342 is the denominator of the ratio in Bayes theorem.

16. Calculate Posterior Probabilities
P(H0|D) = .0132/.1342 ≈ .098 and P(H1|D) = .1210/.1342 ≈ .9023.
P(H0|D) is what many researchers mistakenly think the traditional p value is. The traditional p is P(D|H0).

17. Calculate Posterior Odds
.9023/.098 = 9.21.
Given our data, the alternative hypothesis is more than 9 times more likely than the null hypothesis.
Is this enough to persuade you to reject the null? No? Then let us gather more data.
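The two-hypothesis test in slides 13 through 17 can be reproduced numerically. This sketch carries unrounded z values and densities, so the posterior odds come out near 9.2 rather than the 9.21 obtained from rounded figures above:

```python
import math

def normal_pdf(z):
    """Standard normal probability density at z."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

m, n, sigma = 107, 25, 15
sem = sigma / math.sqrt(n)        # standard error of the mean = 3.0
z_null = (m - 100) / sem          # z under H0: mu = 100
z_alt = (m - 110) / sem           # z under H1: mu = 110

# Equal priors of .5 on each hypothesis
joint_null = 0.5 * normal_pdf(z_null)
joint_alt = 0.5 * normal_pdf(z_alt)
p_d = joint_null + joint_alt      # P(D), the Bayes-theorem denominator

post_null = joint_null / p_d      # ~0.098
post_alt = joint_alt / p_d        # ~0.902
posterior_odds = post_alt / post_null  # ~9.2
```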

18. Calculate Likelihoods and P(D)
For a new sample of 25, M = 106.
z = (106 − 100)/3 = 2 under the null, probability density .0540; z = (106 − 110)/3 = −1.33 under the alternative, probability density .1647.
P(D|H0) = .0540 and P(D|H1) = .1647.
.098 and .9023, the posterior probabilities from the previous analysis, become the prior probabilities in the new analysis.

19. Revise the Probabilities, Again
P(H0|D) = (.098)(.0540) / [(.098)(.0540) + (.9023)(.1647)] = .0053/.1539 = .0344.
With the posterior probability of the null at .0344, we are likely comfortable rejecting it.

20. Newly Revised Posterior Odds
.9656/.0344 = 28.1. The alternative is more than 28 times more likely than the null.
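Slides 18 through 20 feed the first posteriors back in as priors. A sketch of that sequential update, using the densities directly as likelihoods (any common scaling constant cancels in the ratio):

```python
import math

def normal_pdf(z):
    """Standard normal probability density at z."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

prior_null, prior_alt = 0.098, 0.9023      # posteriors from the first sample
sem = 3.0
like_null = normal_pdf((106 - 100) / sem)  # ~0.0540
like_alt = normal_pdf((106 - 110) / sem)   # ~0.164

num_null = prior_null * like_null
num_alt = prior_alt * like_alt
post_null = num_null / (num_null + num_alt)    # ~0.034
posterior_odds = (1 - post_null) / post_null   # ~28
```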

21. The Alternative Hypothesis
Note that the alternative hypothesis here was exact, μ = 110. How do we set it?
It could be the prediction of an alternative theory.
We could make it μ = the value most likely given the observed data (the sample mean).

22. P(H|D) and P(D|H)The P(H|D) is the probability that naïve researchers think they have when they compute a p value.What they really have is P(D or more extreme|H).So why don’t more researchers use Bayesian stats to get P(H|D) ?Traditionalists are uncomfortable with the subjectivity involved in setting prior probabilities.

23. Bayesian Confidence Intervals
Parameters are thought of as random variables rather than constants.
The distribution of a random variable represents our knowledge about what its true value may be.
The wider that distribution, the greater our ignorance.

24. Precision (prc)
The prc is the inverse of the variance of the distribution of the parameter.
Thus, the greater the prc, the more we know about the parameter.
For means, σM = σ/√N, so SEM² = s²/N, and the inverse of SEM² is N/s² = precision.

25. Priors: Informative or Non-informative
We may think of the prior distribution of the parameter as noninformative:
All possible values are equally likely.
For example, a uniform distribution from 0 to 1, or a uniform distribution from −∞ to +∞.
Or as informative:
Some values are more likely than others.
For example, a normal distribution with a certain mean.

26. Posterior Distribution of the Parameter
When we receive new data, we revise the prior distribution of the parameter.
We can construct a confidence interval from the posterior distribution.
Example: We want to estimate μ.

27. Estimating We confess absolute ignorance about the value of , but are willing to assume a normal distribution for the parameter.We sample 100 scores.M = 107, s2 = 200.The Precision = the inverse of the squared standard error = n/s2.

28. 95% Bayesian Confidence Interval
SEM² = 1/prc = 1/.5 = 2, so SEM = 1.414.
CI = 107 ± 1.96(1.414) = 104.23 to 109.77.
This is identical to the traditional CI.
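With a flat prior, the Bayesian interval reduces to the familiar M ± 1.96·SEM. A quick sketch of the computation for this sample:

```python
import math

n, m, s2 = 100, 107, 200
prc = n / s2                  # precision = 0.5
sem = math.sqrt(1 / prc)      # SEM = sqrt(2) ~ 1.414
lo, hi = m - 1.96 * sem, m + 1.96 * sem  # ~104.23 to ~109.77
```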

29. New Data Become Available
N = 81, M = 106, s² = 243.
Precision = 81/243 = 1/3 = prcsample.
Our prior distribution, the posterior distribution from the first analysis, had M = 107, precision = 1/2.
The new posterior distribution will be characterized by a weighted combination of the prior distribution and the new data.

30. Revised μ
The revised mean is the precision-weighted average of the prior mean and the sample mean:
μ = [(.5)(107) + (1/3)(106)] / (.5 + 1/3) = 88.833/.8333 = 106.6.

31. Revised SEM²
Revised precision = sum of prior and sample precisions = .5 + 1/3 = .8333.
Revised SEM² = inverse of revised precision = 1/.8333 = 1.2.

32. Revised Confidence Interval
SEM = √1.2 = 1.095.
CI = 106.6 ± 1.96(1.095) = 104.45 to 108.75.
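The precision-weighted update in slides 29 through 32 can be sketched as:

```python
import math

prc_prior, m_prior = 0.5, 107.0    # posterior from the first analysis
n, m_sample, s2 = 81, 106.0, 243
prc_sample = n / s2                # 1/3

# Precisions add; the revised mean is the precision-weighted average
prc_post = prc_prior + prc_sample                                  # ~0.8333
m_post = (prc_prior * m_prior + prc_sample * m_sample) / prc_post  # 106.6
sem = math.sqrt(1 / prc_post)                                      # sqrt(1.2) ~ 1.095
lo, hi = m_post - 1.96 * sem, m_post + 1.96 * sem                  # ~104.45 to ~108.75
```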