Comparing Several Means: ANOVA - PowerPoint Presentation

Uploaded by alexa-scheidler on 2018-09-25.
Presentation Transcript


Slide 1

Comparing Several Means: ANOVA

Understand the basic principles of ANOVA

Why it is done

What it tells us

Theory of one-way independent ANOVA

Following up an ANOVA:

Planned contrasts/comparisons

Choosing contrasts

Coding contrasts

Post hoc tests

Slide 2

When and Why

When we want to compare means we can use a t-test. This test has limitations:

You can compare only 2 means: often we would like to compare means from 3 or more groups.

It can be used only with one predictor/independent variable.

ANOVA

Compares several means.

Can be used when you have manipulated more than one independent variable.

It is an extension of regression (the general linear model).

Slide 3

ANOVA

Fisher, the British statistician and geneticist, introduced this analysis.

Let us assume that we test four different feeds and want to see whether body weight in pigs changes with the different feeds.

We are going to test the effect of one factor – feed type. The analysis is termed a one-factor test or one-way ANOVA.

Pigs from a population are assigned at random to each of the four treatments. To be specific, there are four treatment levels.

This is a parametric test.

Slide 4

Why Not Use Lots of t-Tests?

If we want to compare several means, why don't we compare pairs of means with t-tests?

Can’t look at several independent variables

It inflates the Type I error rate.

Familywise error rate = 1 − (0.95)^n, where n is the number of tests.
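The inflation is easy to see numerically. A minimal sketch, assuming each test is independent and run at α = .05:

```python
# Familywise Type I error for n independent tests at alpha = .05:
# P(at least one false positive) = 1 - (0.95)^n
alpha = 0.05
for n in (1, 3, 6, 10):
    familywise = 1 - (1 - alpha) ** n
    print(f"{n:2d} tests -> familywise error = {familywise:.3f}")
```

With the 3 pairwise t-tests needed for 3 groups, the chance of at least one false positive already rises from 5% to about 14%.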

Slide 5

What Does ANOVA Tell Us?

Null hypothesis:

Like a t-test, ANOVA tests the null hypothesis that the means are the same.

Experimental hypothesis:

The means differ.

ANOVA is an omnibus test: it tests for an overall difference between groups.

It tells us that the group means are different.

It doesn't tell us exactly which means differ.

Slide 6

What Does ANOVA Tell Us?

If H0 is rejected, there is at least one difference among the four means.

Slide 7

ANOVA as Regression

Slide 8

Placebo Group

Slide 9

High Dose Group

Slide 10

Low Dose Group

Slide 11

Output from Regression

Slide 12

Experiments vs. Correlation

ANOVA in regression:

Used to assess whether the regression model is good at predicting an outcome.

ANOVA in experiments:

Used to see whether experimental manipulations lead to differences in performance on an outcome.

By manipulating a predictor variable can we cause (and therefore predict) a change in behavior?

The same question is of interest in both regression and experimental manipulations:

In experiments we systematically manipulate the predictor; in regression we don't.

Slide 13

Theory of ANOVA

We calculate how much variability there is between scores:

The total sum of squares (SST).

We then calculate how much of this variability can be explained by the model we fit to the data:

How much variability is due to the experimental manipulation – the model sum of squares (SSM)...

... and how much cannot be explained:

How much variability is due to individual differences in performance – the residual sum of squares (SSR).

Slide 14

Theory of ANOVA

We compare the amount of variability explained by the model (experiment) to the error in the model (individual differences).

This ratio is called the F-ratio.

If the model explains a lot more variability than it leaves unexplained, then the experimental manipulation has had a significant effect on the outcome.

Slide 15

Theory of ANOVA

If the experiment is successful, then the model will explain more variance than it leaves unexplained:

SSM will be greater than SSR.

Slide 16

ANOVA by Hand

Testing the effects of Viagra on libido using three groups:

Placebo (sugar pill)

Low-dose Viagra

High-dose Viagra

The outcome/dependent variable (DV) was an objective measure of libido.

Slide 17

The Data

Slide 19

Step 1: Calculate SST

(where the mean is the grand mean)

Slide 20

Grand Mean

Total Sum of Squares (SST):

Slide 22

SST = Σ(observed − grand mean)²

SST = s²(N − 1), where s² is the variance of all N scores
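Step 1 can be sketched in code. The scores below are illustrative – the transcript's data table did not survive extraction – and are chosen so the totals match the SS values quoted later on the slides:

```python
# Hypothetical libido scores for the three groups (5 per group);
# chosen to reproduce the slide totals, not taken from the transcript.
placebo = [3, 2, 1, 1, 4]
low     = [5, 2, 4, 2, 3]
high    = [7, 4, 5, 3, 6]
scores = placebo + low + high

grand_mean = sum(scores) / len(scores)
sst = sum((x - grand_mean) ** 2 for x in scores)  # total sum of squares
print(f"grand mean = {grand_mean:.3f}, SST = {sst:.2f}")  # SST ≈ 43.73
```

The slides round SST to 43.74; the exact value for these scores is 43.73.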

Slide 23

Degrees of Freedom

Degrees of freedom (df) are the number of values that are free to vary.

In general, the df are one less than the number of values used to calculate the SS.

dfT = N − 1

Slide 24

Model Sum of Squares (SSM):

Difference between the model estimate and the mean (the "grand mean").

Slide 25

Grand Mean

Model Sum of Squares (SSM):

Slide 26

Step 2: Calculate SSM
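A sketch of Step 2, using the same illustrative scores as before (the original data table is not in the transcript): SSM sums the squared gaps between each group mean and the grand mean, weighted by group size.

```python
# Hypothetical scores (not from the transcript), chosen to match the slide totals.
groups = {
    "placebo": [3, 2, 1, 1, 4],
    "low":     [5, 2, 4, 2, 3],
    "high":    [7, 4, 5, 3, 6],
}
all_scores = [x for g in groups.values() for x in g]
grand_mean = sum(all_scores) / len(all_scores)

# Model sum of squares: n_k * (group mean - grand mean)^2, summed over groups.
ssm = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups.values())
print(f"SSM = {ssm:.2f}")  # ≈ 20.13 (the slides round to 20.14)
```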

Slide 27

Model Degrees of Freedom

How many values did we use to calculate SSM?

We used the 3 group means, so dfM = 3 − 1 = 2.

Slide 28

Grand Mean

Residual Sum of Squares (SSR):

df = 4 within each of the three groups

Slide 29

Step 3: Calculate SSR

SSR = Σ(xᵢ − group mean)²
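Step 3 in code, again with the illustrative scores used earlier (chosen to reproduce the slide totals): each score is compared with its own group's mean, not the grand mean.

```python
# Hypothetical scores (not from the transcript), chosen to match the slide totals.
groups = {
    "placebo": [3, 2, 1, 1, 4],
    "low":     [5, 2, 4, 2, 3],
    "high":    [7, 4, 5, 3, 6],
}

# Residual sum of squares: squared deviation of each score from its own
# group mean, summed over all scores.
ssr = 0.0
for g in groups.values():
    m = sum(g) / len(g)
    ssr += sum((x - m) ** 2 for x in g)
print(f"SSR = {ssr:.2f}")  # 23.60, matching the slides
```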

Slide 30

Step 3: Calculate SSR (continued)

Slide 31

Residual Degrees of Freedom

How many values did we use to calculate SSR?

We used the 5 scores in each group, so dfR = 3 × (5 − 1) = 12.

Slide 32

Double Check

SST = SSM + SSR

43.74 = 20.14 + 23.60

dfT = dfM + dfR

14 = 2 + 12

Slide 33

Step 4: Calculate the Mean Squares

MSM = SSM / dfM ≈ 10.067

MSR = SSR / dfR ≈ 1.967

Slide 34

Step 5: Calculate the F-Ratio

F = MSM / MSR ≈ 10.067 / 1.967 ≈ 5.12

Slide 35

Step 6: Construct a Summary Table

Source     SS      df   MS      F
Model      20.14    2   10.067  5.12*
Residual   23.60   12    1.967
Total      43.74   14
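Steps 4–6 can be checked end to end in code, once more with the illustrative scores used in the earlier steps (chosen to reproduce the slide totals, since the original data table is an image):

```python
# Hypothetical scores (not from the transcript), chosen to match the slide totals.
groups = [[3, 2, 1, 1, 4], [5, 2, 4, 2, 3], [7, 4, 5, 3, 6]]
scores = [x for g in groups for x in g]
gm = sum(scores) / len(scores)

sst = sum((x - gm) ** 2 for x in scores)
ssm = sum(len(g) * (sum(g) / len(g) - gm) ** 2 for g in groups)
ssr = sst - ssm                    # the partition SST = SSM + SSR

df_m = len(groups) - 1             # 2
df_r = len(scores) - len(groups)   # 12
ms_m = ssm / df_m                  # model mean square, ~10.067
ms_r = ssr / df_r                  # residual mean square, ~1.967
f = ms_m / ms_r                    # ~5.12, matching the summary table

print(f"F({df_m}, {df_r}) = {f:.2f}")
```

For real analyses, scipy.stats.f_oneway computes the same F-ratio directly from the group score lists.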

Multiple-comparison tests

The ANOVA you examined is used to test the hypothesis that there is no difference in the sample means among the k treatment levels. However, we cannot conclude, after doing the test, which of the mean values differ from one another.

Slide 37

Tukey Test

Tukey test: for balanced, orthogonal designs.

Step one is to arrange and number all five sample means in order of increasing magnitude.

Then calculate the pairwise differences in sample means.

We use a t-test "analog" to calculate a q-statistic.

Slide 38

Tukey Test

q = (x̄_B − x̄_A) / √(s²/n), where:

s² is the error mean square from the ANOVA computation

n is the number of data points in each of groups B and A

Remember, this is a completely balanced design.

Slide 39
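The q computation can be sketched under the stated assumptions (balanced design, s² = error mean square). The group means and MSR here are the illustrative values from the worked ANOVA example earlier, not slide 39's own data:

```python
import math

# Illustrative values from the worked ANOVA example (not from slide 39).
means = {"placebo": 2.2, "low": 3.2, "high": 5.0}
ms_error = 1.967   # error mean square (MSR) from the summary table
n = 5              # scores per group (balanced design)

# q for the largest vs. smallest mean -- the first comparison in the procedure.
ordered = sorted(means.values())
q = (ordered[-1] - ordered[0]) / math.sqrt(ms_error / n)
print(f"q = {q:.2f}")
```

The observed q is then compared with a critical value from a studentized-range table for k groups and the residual df.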

Tukey Test

Start by comparing the largest mean with the smallest mean. Then, once the largest mean has been compared with increasingly large second means, move to the second-largest mean.

If the null hypothesis is accepted for a pair of means, then all other means within that range cannot be declared different.