
Presentation Transcript

1. Hypothesis Testing: Minding your Ps and Qs and Error Rates
Bonnie Halpern-Felsher, PhD

2. Hypothesis Testing
Hypothesis testing is the key to our scientific inquiry.
In addition to research hypotheses, we need statistical hypotheses.
This involves the statement of a null hypothesis, an alternative hypothesis, and the selection of a level of significance.

3. Statistical Hypotheses
Statements about the population that the statistical process will examine to decide their likely truth or validity
Statistical hypotheses are stated in terms of the population, not the sample, yet they are tested on samples
Based on the mathematical concept of probability
Null Hypothesis
Alternative Hypothesis

4. Null Hypothesis
What is the Null Hypothesis?

5. Null Hypothesis
The case when the two groups are equal; the population means are the same
Null Hypothesis = H0
This is the hypothesis actually being tested
H0 is assumed to be true

6. Alternative Hypothesis
What is the Alternative Hypothesis?

7. Alternative Hypothesis
The case when the two groups are not equal; when there is some treatment difference; when other possibilities exist
Alternative Hypothesis = H1 or Ha
H1 is assumed to be true when H0 is false.

8. Statistical Hypotheses
The H0 and H1 must be mutually exclusive
The H0 and H1 must be exhaustive; that is, no other possibilities can exist
The H1 contains our research hypotheses

9. Statistical Hypotheses
Can you give an example of a Null and Alternative Hypothesis?

10. Null Hypothesis
H0: There is no treatment effect; the drug has no effect
H1: There is a treatment effect; the drug has an (some, any) effect

11. Evaluation of the Null
In order to gain support for our research hypothesis, we must reject the Null Hypothesis
Thereby concluding that the alternative hypothesis (likely) reflects what is going on in the population
You can never "prove" the Alternative Hypothesis!

12. Significance Level
Need to decide on a Significance Level:
The probability that the test statistic will reject the null hypothesis when the null hypothesis is true
Significance is a property of the distribution of a test statistic, not of any particular draw of the statistic
Determines the Region of Rejection
Generally 5% or 1%
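
As a quick illustration (not part of the original slides), the sketch below computes the region of rejection for a two-sided z-test at a chosen alpha. It assumes Python with SciPy, which the presentation does not mention; the numbers are arbitrary.

from scipy.stats import norm

alpha = 0.05                      # chosen significance level
z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value (about 1.96)

print(f"alpha = {alpha}")
print(f"Region of rejection: |z| > {z_crit:.3f}")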

13. Alpha Level
The value of alpha (α) is the significance level and is tied to the confidence level of our test.
For results with a 90% level of confidence, the value of α is 1 - 0.90 = 0.10.
For results with a 95% level of confidence, the value of α is 1 - 0.95 = 0.05.
Typically set at 5% (.05) or 1% (.01)

14. p-value
The p-value, or calculated probability, is the probability of obtaining a result at least as extreme as the one observed in the study, assuming the null hypothesis (H0) is true
Informally, the probability that the observed statistic occurred by chance alone
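
A minimal sketch of how a p-value is obtained from an observed test statistic. The z value below is hypothetical and the use of SciPy is an assumption, not something stated in the slides.

from scipy.stats import norm

z_observed = 2.1                        # hypothetical observed z statistic
p_value = 2 * norm.sf(abs(z_observed))  # two-sided probability of a result at
                                        # least this extreme when H0 is true
print(f"z = {z_observed}, two-sided p-value = {p_value:.4f}")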

15. [No text on this slide]

16. Obtaining Significance
Compare the values of alpha and the p-value. Two possibilities emerge:
The p-value is less than or equal to alpha (e.g., p ≤ .05). In this case we reject the null hypothesis and say that the result is statistically significant. In other words, we are reasonably sure that something besides chance alone gave us the observed sample.
The p-value is greater than alpha (e.g., p > .05). In this case we fail to reject H0, and the result is not statistically significant. The observed data are likely due to chance alone.
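
A minimal sketch of this decision rule, assuming Python with NumPy and SciPy and entirely made-up drug and placebo data; it is an illustration, not the presenter's analysis.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
drug    = rng.normal(loc=5.5, scale=2.0, size=40)  # hypothetical treatment group
placebo = rng.normal(loc=5.0, scale=2.0, size=40)  # hypothetical control group

alpha = 0.05
t_stat, p_value = ttest_ind(drug, placebo)

if p_value <= alpha:
    print(f"p = {p_value:.3f} <= {alpha}: reject H0 (statistically significant)")
else:
    print(f"p = {p_value:.3f} > {alpha}: fail to reject H0 (not significant)")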

17. Error
In an ideal world we would always reject the null hypothesis when it is false, and we would never reject the null hypothesis when it is indeed true. But two other scenarios are possible, each of which results in an error:
Type I and Type II Errors

18. Type I Error
Rejection of a null hypothesis that is actually true
Same as a "false positive"
The alpha value gives us the probability of a Type I error. For example, α = .05 means a 5% chance of rejecting a true null hypothesis

19. Type I Error
Example:
H0 = drug has no effect
H1 = drug has an effect
We reject H0 and instead claim H1 is correct, i.e., that the drug has an effect when in fact it does not. The drug is therefore falsely claimed to have an effect.

20. Controlling Type I Error
Alpha is the maximum probability of making a Type I error. E.g., with a 95% confidence level, the chance of a Type I error is 5%
Therefore, a 5% chance of rejecting H0 when H0 is true
That is, on average 1 out of 20 tests of a true null hypothesis will result in a Type I error
We can control Type I error by setting a different α level.
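
To make the "1 out of 20" point concrete, the simulation below (an illustrative sketch, not from the slides; it assumes Python with NumPy and SciPy) repeatedly tests two groups drawn from the same population, so H0 is true, and counts how often the test still rejects.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
alpha, n_studies = 0.05, 10_000

false_positives = 0
for _ in range(n_studies):
    a = rng.normal(loc=0.0, scale=1.0, size=30)  # both groups come from the
    b = rng.normal(loc=0.0, scale=1.0, size=30)  # same population, so H0 is true
    _, p = ttest_ind(a, b)
    false_positives += p <= alpha

print(f"Estimated Type I error rate: {false_positives / n_studies:.3f} (expected about {alpha})")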

21. Controlling Type I Error
It is particularly important to make the α level more conservative when calculating several statistical tests and comparisons.
Each test has a 5% chance of giving a significant result just by chance. So, if running 10 comparisons, we should set a more conservative α level to control the Type I error rate
Bonferroni correction: α = .05/10 = .005
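
A minimal sketch of the Bonferroni correction described above; the list of p-values is made up for illustration.

alpha = 0.05
p_values = [0.001, 0.004, 0.012, 0.030, 0.049, 0.20, 0.45, 0.61, 0.74, 0.90]  # hypothetical

bonferroni_alpha = alpha / len(p_values)  # .05 / 10 = .005, as on the slide

for p in p_values:
    decision = "reject H0" if p <= bonferroni_alpha else "fail to reject H0"
    print(f"p = {p:.3f} vs corrected alpha {bonferroni_alpha:.3f}: {decision}")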

22. Type II Error
Failing to reject a null hypothesis that is false
Like a "false negative"
E.g., concluding the drug had no effect when it actually did
The probability of a Type II error is given by the Greek letter beta (β). This number is related to the power or sensitivity of the hypothesis test, denoted by 1 - β

23. Controlling Type II Error: Power
Power: the probability that the test rejects the null hypothesis when the null hypothesis is false
Power = P(reject H0 | H1 is true) = P(accept H1 | H1 is true)
As power increases, the chance of a Type II error (false negative; β) decreases
Power = 1 - β
Power increases with sample size
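
The sketch below estimates power (1 - β) by simulation for a fixed, made-up treatment effect, showing that power grows with sample size; Python with NumPy and SciPy is assumed, and none of the numbers come from the presentation.

import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2)
alpha, effect, n_sims = 0.05, 0.5, 2_000  # true effect of 0.5 SD, so H1 is true

for n in (20, 50, 100):
    rejections = 0
    for _ in range(n_sims):
        control   = rng.normal(loc=0.0,    scale=1.0, size=n)
        treatment = rng.normal(loc=effect, scale=1.0, size=n)
        _, p = ttest_ind(treatment, control)
        rejections += p <= alpha
    power = rejections / n_sims
    print(f"n per group = {n:3d}: power ≈ {power:.2f}, beta ≈ {1 - power:.2f}")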

24. Type I and II Errors

Decision \ True/Reality    H0 true    H1 true
Decide H0                  1 - α      β
Decide H1                  α          1 - β

25. Type I and II Errors

Decision \ True/Reality    H0 true    H1 true
Decide H0                  1 - α      β
Decide H1                  α          1 - β

The probability of making an error where you "decide H1 but H0 is true" is α, so the probability of being correct, given that H0 is true, is 1 - α. Similarly, the probability of making an error where you "decide H0 yet H1 is true" is β, so the probability of making a correct decision given that H1 is true is 1 - β.

26. Questions????