

Introduction to CFA

LEARNING OBJECTIVES

Upon completing this chapter, you should be able to do the following:

Distinguish between exploratory factor analysis and confirmatory factor analysis.
Assess the construct validity of a measurement model.
Know how to represent a measurement model using a path diagram.
Understand the basic principles of statistical identification and know some of the primary causes of SEM identification problems.
Understand the concept of fit as it applies to measurement models and be able to assess the fit of a confirmatory factor analysis model.
Know how SEM can be used to compare results between groups. This includes assessing the cross-validation of a measurement model.

SEM – Confirmatory Factor Analysis

What is it?

Why use it?

Confirmatory Factor Analysis Overview

Confirmatory Factor Analysis . . .

is similar to EFA in some respects, but philosophically it is quite different. With CFA, the researcher must specify, before any results can be computed, both the number of factors that exist within a set of variables and which factor each variable will load highly on. The technique therefore does not assign variables to factors; the researcher must make this assignment in advance. SEM is then applied to test the extent to which the researcher's a priori pattern of factor loadings represents the actual data.

Confirmatory Factor Analysis Defined
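To make the a priori specification step described above concrete, here is a minimal sketch of how a researcher might write down the hypothesized factor-to-variable assignment before estimation. The construct names, item labels, and use of Python/NumPy are illustrative assumptions, not part of the presentation.

```python
# Minimal sketch of an a-priori CFA specification (illustrative names only).
import numpy as np

# The researcher assigns each measured variable to exactly one factor
# before any estimation: this is what distinguishes CFA from EFA.
measurement_theory = {
    "JobSatisfaction": ["js1", "js2", "js3", "js4"],
    "Commitment":      ["oc1", "oc2", "oc3", "oc4"],
}

indicators = [item for items in measurement_theory.values() for item in items]
factors = list(measurement_theory)

# Pattern matrix: 1 marks a loading to be freely estimated,
# 0 marks a loading fixed to zero in advance (never estimated).
pattern = np.zeros((len(indicators), len(factors)), dtype=int)
for j, factor in enumerate(factors):
    for item in measurement_theory[factor]:
        pattern[indicators.index(item), j] = 1

print(pattern)  # each row contains a single 1: every indicator loads on one factor only
```

SEM software would then estimate only the loadings marked 1 and test how well the implied covariance structure reproduces the observed data.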

Review of and Contrast with Exploratory Factor Analysis

EFA (exploratory factor analysis) explores the data and provides the researcher with information about how many factors are needed to best represent the data. With EFA, all measured variables are related to every factor by a factor loading estimate. Simple structure results when each measured variable loads highly on only one factor and has smaller loadings on the other factors (i.e., loadings < .4).

The distinctive feature of EFA is that the factors are derived from statistical results, not from theory, and so they can only be named after the factor analysis is performed. EFA can be conducted without knowing how many factors really exist or which variables belong with which constructs. In this respect, CFA and EFA are not the same.
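As a hedged illustration of the simple-structure idea, the sketch below flags cross-loadings of .4 or above in a hypothetical two-factor EFA loading matrix; the numbers are invented for illustration.

```python
# Simple-structure check: each variable should load highly on one factor
# and below .4 on the others. The loading matrix is illustrative only.
import numpy as np

loadings = np.array([
    [0.78, 0.12],
    [0.81, 0.05],
    [0.69, 0.38],
    [0.10, 0.74],
    [0.45, 0.66],   # cross-loading of .45 on the first factor
    [0.08, 0.80],
])

for i, row in enumerate(loadings):
    primary = np.argmax(np.abs(row))
    cross = [abs(l) for j, l in enumerate(row) if j != primary]
    if max(cross) >= 0.4:
        print(f"variable {i}: cross-loading of {max(cross):.2f} violates simple structure")
```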


CFA and Construct Validity

One of the biggest advantages of CFA/SEM is its ability to assess the construct validity of a proposed measurement theory. Construct validity . . . is the extent to which a set of measured items actually reflects the theoretical latent construct those items are designed to measure.

Construct validity is made up of four important components:

Convergent validity – three approaches:
  Factor loadings.
  Variance extracted.
  Reliability.
Discriminant validity.
Nomological validity.
Face validity.

Rules of Thumb

Construct Validity: Convergent and Discriminant Validity

Standardized loading estimates should be .5 or higher, and ideally .7 or higher.
Variance extracted (VE) should be .5 or greater to suggest adequate convergent validity.
Construct reliability should be .7 or higher to indicate adequate convergence or internal consistency.
VE estimates for two factors should also be greater than the square of the correlation between the two factors to provide evidence of discriminant validity.
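These thresholds can be checked directly from completely standardized loading estimates. The sketch below uses the conventional formulas (variance extracted as the average squared loading; construct reliability as the squared sum of loadings divided by that quantity plus the summed error variances); the loadings and the inter-factor correlation are hypothetical.

```python
# Convergent and discriminant validity checks from standardized loadings.
import numpy as np

def variance_extracted(loadings):
    lam = np.asarray(loadings)
    return float(np.mean(lam ** 2))          # average squared standardized loading

def construct_reliability(loadings):
    lam = np.asarray(loadings)
    num = lam.sum() ** 2                      # squared sum of loadings
    return float(num / (num + np.sum(1 - lam ** 2)))

factor_a = [0.72, 0.78, 0.81, 0.69]           # hypothetical standardized loadings
factor_b = [0.66, 0.74, 0.80]
phi_ab = 0.55                                 # hypothetical inter-factor correlation

ve_a, ve_b = variance_extracted(factor_a), variance_extracted(factor_b)
print(f"VE A = {ve_a:.2f}, CR A = {construct_reliability(factor_a):.2f}")  # want VE >= .5, CR >= .7
print(f"VE B = {ve_b:.2f}, CR B = {construct_reliability(factor_b):.2f}")

# Discriminant validity: both VE estimates should exceed the squared correlation.
print("discriminant validity supported:", min(ve_a, ve_b) > phi_ab ** 2)
```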

Confirmatory Factor Analysis Stages

Stage 1: Defining Individual Constructs
Stage 2: Developing the Overall Measurement Model
Stage 3: Designing a Study to Produce Empirical Results
Stage 4: Assessing the Measurement Model Validity
Stage 5: Specifying the Structural Model
Stage 6: Assessing Structural Model Validity

Note: CFA involves stages 1–4 above. SEM is stages 5 and 6.

Stage 1: Defining Individual Constructs

List constructs that will comprise the measurement model.
Determine if existing scales/constructs are available or can be modified to test your measurement model.
If existing scales/constructs are not available, then develop new scales.

Rules of Thumb

Defining Individual Constructs

All constructs must display adequate construct validity, whether they are new scales or scales taken from previous research.
Even previously established scales should be carefully checked for content validity.
Experts should judge the items' content for validity in the early stages of scale development.
When two items have virtually identical content, one should be dropped.
Items upon which the judges cannot agree should be dropped.
A pre-test should be used to purify measures prior to confirmatory testing.

Stage 2: Developing the Overall Measurement Model

Key Issues:

Unidimensionality.
Measurement model.
Items per construct.
Identification.
Reflective vs. formative measurement models.

Stage 2: A Measurement Model (and SEM)

A SEM diagram commonly has certain standard elements: latent constructs are ellipses, indicators are rectangles, and error and residual terms are circles. Single-headed arrows represent causal relations (note that causality runs from a latent construct to its indicators), and double-headed arrows represent correlations between indicators or between exogenous latent constructs. Path coefficient values may be placed on the arrows from latent constructs to indicators, from one latent construct to another, from an error term to an indicator, or from a residual term to a latent construct. Each endogenous variable (the single dependent variable in the model pictured on the slide) has an error term, sometimes called a disturbance term or residual error, which should not be confused with the indicator error, e, associated with each indicator variable.

(Figure on slide: path diagram of the measurement model.)
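In equation form, the diagram conventions above correspond to the standard measurement and structural equations; this is a generic statement of the notation, not the specific model pictured on the slide.

```latex
% Measurement equations: each indicator equals a loading times its latent
% construct plus an indicator error term (the "e" mentioned above).
x_i = \lambda_{x_i}\,\xi + \delta_i \qquad\qquad y_j = \lambda_{y_j}\,\eta + \varepsilon_j

% Structural equation: the endogenous construct carries a residual
% (disturbance) term, distinct from the indicator error terms.
\eta = \gamma\,\xi + \zeta
```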

Rules of Thumb

Developing the Overall Measurement Model

In standard CFA applications testing a measurement theory, within- and between-construct error covariance terms should be fixed at zero and not estimated.
In standard CFA applications testing a measurement theory, all measured variables should be free to load on only one construct.
Latent constructs should be indicated by at least three measured variables, preferably four or more. In other words, latent factors should be statistically identified.

Rules of Thumb

Developing the Overall Measurement Model (continued)

Formative factors are not latent and are not validated in the same way as conventional reflective factors; internal consistency and reliability are not important. The variables that make up a formative factor should explain the largest portion of variation in the formative construct itself and should relate highly to other constructs that are conceptually related (minimum correlation of .5).
Formative factors present greater difficulties with statistical identification. Additional variables or constructs must be included along with a formative construct in order to achieve an over-identified model.
A formative factor should be represented by the entire population of items that form it; therefore, items should not be dropped because of a low loading.
With reflective models, by contrast, any item that is not expected to correlate highly with the other indicators of a factor should be deleted.

Rules of Thumb

Designing a Study to Provide Empirical Results

The 'scale' of a latent construct can be set by either:
  Fixing one loading and setting its value to 1, or
  Fixing the construct variance and setting its value to 1.
Congeneric, reflective measurement models in which all constructs have at least three item indicators should be statistically identified.
The researcher should check for errors in the specification of the measurement model when identification problems are indicated.
Models with large samples (more than 300) that adhere to the three-indicator rule generally do not produce Heywood cases.
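The identification rules above come down to counting: a model is over-identified when the unique variances and covariances among the measured variables outnumber the free parameters. The sketch below assumes a congeneric, reflective model with one loading per construct fixed to 1 (the first scaling option listed above); other parameterizations change the counts.

```python
# Counting rule behind statistical identification (congeneric model,
# one loading per construct fixed to 1 to set the latent scale).
def identification_check(indicators_per_construct):
    k = len(indicators_per_construct)           # number of latent constructs
    p = sum(indicators_per_construct)           # total measured variables
    unique_moments = p * (p + 1) // 2           # distinct variances and covariances
    free_params = (
        (p - k)             # freely estimated loadings
        + p                 # indicator error variances
        + k                 # construct variances
        + k * (k - 1) // 2  # construct covariances
    )
    df = unique_moments - free_params
    if df > 0:
        return df, "over-identified"
    return df, "just-identified" if df == 0 else "under-identified"

print(identification_check([3, 3]))  # two constructs, three indicators each: (8, 'over-identified')
print(identification_check([3]))     # a single three-indicator construct: (0, 'just-identified')
```

This is one way to see why the three-indicator guideline matters: a stand-alone construct with only two indicators comes out under-identified under this counting, whereas three indicators make it just-identified and multiple constructs together are over-identified.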

Key Issues:

Measurement scales in CFA.
SEM/CFA and sampling.
Specifying the model:
  Which indicators belong to each construct?
  Setting the scale to "1" for one indicator on each construct.
Issues in identification.
Problems in estimation:
  Heywood cases.
  Illogical standardized parameters.

Stage 3: Designing a Study to Produce Empirical Results

Identification

Recognizing Identification Problems:

Very large standard errors.
Inability to invert the information matrix (no solution can be found).
Wildly unreasonable estimates, including negative error variances.
Unstable parameter estimates.
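A simple way to operationalize these warning signs is to scan the estimated solution for them. The sketch below uses a hypothetical dictionary of parameter estimates and standard errors; the parameter names, the assumption of completely standardized loadings, and the standard-error cut-off are illustrative, not fixed rules.

```python
# Scan a hypothetical set of CFA estimates for the warning signs above.
estimates = {
    # parameter: (estimate, standard error) -- all values invented
    "lambda_x1": (0.82, 0.05),
    "lambda_x2": (1.07, 0.06),    # standardized loading > 1.0
    "theta_x3":  (-0.12, 0.04),   # negative error variance (Heywood case)
    "phi_12":    (0.44, 3.90),    # suspiciously large standard error
}

SE_CUTOFF = 2.5  # arbitrary illustration threshold, not a published rule

for name, (est, se) in estimates.items():
    if name.startswith("theta") and est < 0:
        print(f"{name}: negative error variance (Heywood case)")
    if name.startswith("lambda") and abs(est) > 1.0:
        print(f"{name}: standardized loading outside the feasible -1 to +1 range")
    if se > SE_CUTOFF:
        print(f"{name}: very large standard error; possible identification problem")
```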

Stage 4: Assessing Measurement Validity

Key Issues:

Assessing fit:
  Goodness-of-fit (GOF).
  Construct validity.
Diagnosing problems:
  Path estimates.
  Standardized residuals.
  Modification indices.
  Specification search.

Rules of Thumb

Assessing Measurement Model Validity

Loading estimates can be statistically significant but still be too low to qualify as a good item (standardized loadings below |.5|). In CFA, items with low loadings become candidates for deletion.
Completely standardized loadings above +1.0 or below -1.0 are out of the feasible range and can be an important indicator of some problem with the data.
Typically, standardized residuals less than |2.5| do not suggest a problem.
Standardized residuals greater than |4.0| suggest a potentially unacceptable degree of error that may call for the deletion of an offending item.
Standardized residuals between |2.5| and |4.0| deserve some attention, but may not suggest any changes to the model if no other problems are associated with those two items.
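The three residual rules above amount to a simple classification by cut-off, sketched below with hypothetical standardized residual values.

```python
# Classify standardized residuals by the |2.5| and |4.0| cut-offs above.
def screen_residual(value):
    if abs(value) < 2.5:
        return "no problem indicated"
    if abs(value) <= 4.0:
        return "deserves attention if other problems co-occur"
    return "potentially unacceptable; offending item is a deletion candidate"

# Hypothetical residuals for a few indicator pairs.
for pair, resid in {("x1", "x4"): 1.3, ("x2", "x7"): 3.1, ("x3", "x5"): -4.6}.items():
    print(pair, resid, "->", screen_residual(resid))
```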

Rules of Thumb

Assessing Measurement Model Validity (continued)

The researcher should use modification indices only as a guideline for improving relationships in the model that can be theoretically justified.
Specification searches based on purely empirical grounds are discouraged because they are inconsistent with the theoretical basis of CFA and SEM.
CFA results suggesting more than minor modification should be re-evaluated with a new data set. For instance, if more than two out of every 15 measured variables are deleted, the modifications cannot be considered minor.

CFA Learning Checkpoint

What is the difference between EFA and CFA?
Describe the four stages of CFA.
What is the difference between reflective and formative measurement models?
What is "statistical identification," and how can identification problems be avoided?
How do you decide if CFA is successful?

The End