Presentation Transcript

Slide 1

Validity of Selection Measures

Part 2

Foundations of Measurement for Human Resource Selection

CHAPTER 5

Slide 2


An Overview of Validity

Validity: A Definition

The degree to which available evidence supports inferences made from scores on selection measures.

Validation

The research process of discovering what and how well a selection procedure measures

Importance of Validity in Selection

Shows how well a predictor (such as a test) is related to important job performance criteria

Can indicate what types of inferences may be made from scores obtained on the measurement device

Slide 3

An Overview of Validity (cont’d)

The Relation between Reliability and Validity

High reliability without validity is possible

High validity without reliability is not possible

Quantitatively, the relationship between validity and reliability is

r_xy = √(r_xx × r_yy)

where r_xy = maximum possible correlation between predictor X and criterion Y (the validity coefficient); r_xx = reliability coefficient of predictor X; r_yy = reliability coefficient of criterion Y.
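A small numeric illustration of this formula (a minimal sketch; the reliability values below are hypothetical, not taken from the chapter):

```python
import math

def max_validity(rxx: float, ryy: float) -> float:
    """Maximum possible validity coefficient r_xy, given the reliability
    of the predictor (rxx) and the reliability of the criterion (ryy)."""
    return math.sqrt(rxx * ryy)

# Hypothetical reliabilities: predictor = .80, criterion = .60
print(round(max_validity(0.80, 0.60), 2))  # 0.69 (validity can be no higher than this)
```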

Slide 4


An Overview of Validity (cont’d)

Validation Study

Provides evidence for determining the accuracy of judgments made from scores on a predictor about important job behaviors as represented by a criterion.

Types of Validation Strategies

Content validation

Criterion-related validation

Concurrent and predictive strategies

Construct validation

Slide 5


Content Validation Strategy

Content Validity

Is shown when the content (items, questions, etc.) of a selection measure representatively samples the content of the job for which the measure will be used.

Why Content Validation?

Is applicable to hiring situations involving a small number of applicants for a position

Focuses on job content, not job success criteria

Increases applicant perception of the fairness of selection procedures

Slide 6


Content Validation Strategy (cont’d)

“Content of the job”

Job behaviors, the associated knowledge, skills, abilities, and other personal characteristics (KSAs) that are necessary for effective job performance

Job Content Domain

The behaviors and KSAs that compose the content of the job to be assessed

Content of the measure must be representative of the job content domain for the measure to possess content validity

Slide 7


Content Validation Strategy (cont’d)

Content Validation Strategy

Involves construction of a new measure rather than the validation of an existing one.

Emphasizes the role of expert judgment in determining the validity of a measure rather than relying on statistical methods

Face Validity

Concerns the appearance to job applicants of whether a measure is measuring what is intended—the appearance to applicants taking the test that the test is related to the job.

Increases acceptability of a measure

Slide 8


Major Aspects of Content Validation

Conduct of a Comprehensive Job Analysis

A description of the tasks performed on the job

Measures of the criticality and/or importance of the tasks

A specification of KSAs required to perform these critical tasks

Measures of the criticality and/or importance of KSAs

Linkage of important job tasks to important KSAs

Selection of Experts Participating in Study

Specification of Selection Measure Content

Selection Procedure as a Whole

Item-by-Item Analysis

Supplementary Indications of Content Validity

Assessment of Selection Measure and Job Content Relevance

Slide 9



FIGURE 5.1 Example Tasks, KSAs, and Selection Measures for Assessing KSAs of Typist-Clerks

NOTE: A check mark (✔) indicates that a KSA is required to perform a specific job task. KSA selection measures are those developed to assess particular KSAs.

Slide 10



FIGURE 5.2 Depiction of the Inference Points from Analysis of Job Content to Selection Measure Content in Content Validation

Slide 11

Uniform Guidelines and Content Validity

Content validation is not appropriate when:

Mental processes, psychological constructs, or personality traits are not directly observable but inferred from the selection device.

The selection procedure involves KSAs an employee is expected to learn on the job.

The content of the selection device does not resemble a work behavior, or the setting and administration of the selection procedure does not resemble the work setting.

Slide 12

Content Validity versus Criterion-related Validity

Content validity

Focuses on the selection measure itself

Is based on a broader base of data and inference

Is generally characterized as using broader, more judgmental descriptors (description)

Criterion-related validity

Focuses on an external variable

Is narrowly based on a specific set of data

Is couched in terms of precise quantitative indices (prediction)

Slide 13


Criterion-Related Validation Strategies

Concurrent Validation Strategy

Both predictor and criterion data are obtained on a current group of employees, and statistical procedures are used to test for a statistically significant relationship (correlation coefficient) between these two sources of data

Sometimes referred to as the “present employee method” because data are collected for a current group of employees
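A minimal sketch of that statistical step, assuming predictor scores and criterion ratings have already been collected for the same group of current employees (the numbers below are invented for illustration):

```python
from scipy.stats import pearsonr

# Hypothetical data for ten current employees
test_scores = [12, 15, 9, 20, 17, 11, 14, 18, 10, 16]   # predictor scores
performance = [3, 4, 2, 5, 4, 3, 3, 5, 2, 4]            # criterion ratings

r, p = pearsonr(test_scores, performance)
print(f"validity coefficient r = {r:.2f}, p = {p:.3f}")
# A sizable, statistically significant r supports using the predictor.
```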

Slide 14



FIGURE 5.3 Selection of Experimental Ability Tests to Predict Important KSAs for the Job of Industrial Electrician

NOTE: The numbers shown are mean ratings given by subject matter experts used in the analysis of the industrial electrician’s job. High ratings indicate that a particular KSA is relevant to the successful performance of a critical job task. “Yes” indicates that a test appears to be useful in assessing a particular KSA.

Slide 15


FIGURE 5.4 Representation of Relating Predictor Scores with Criterion Data to Test for Validity

Slide 16


Concurrent Criterion-Related Validation

Sampling and Other Factors Affecting Validation

Differences in job tenure or length of employment of the employees who participate in the study

The representativeness (or unrepresentativeness) of present employees to job applicants

Certain individuals missing from the validation study

The motivation of employees to participate in the study or employee manipulation of answers to some selection predictors

Slide 17



TABLE 5.1 Summary of Major Steps Undertaken in Conducting Concurrent and Predictive Validation Studies

Concurrent Validation

Conduct analyses of the job.

Determine relevant KSAs and other characteristics required to perform the job successfully.

Choose or develop the experimental predictors of these KSAs.

Select criteria of job success.

Administer predictors to current employees and collect criterion data.

Analyze predictor and criterion data relationships.

Predictive Validation

Conduct analyses of the job.

Determine relevant KSAs and other characteristics required to perform the job successfully.

Choose or develop the experimental predictors of these KSAs.

Select criteria of job success.

Administer predictors to job applicants and file results.

After passage of a suitable period of time, collect criterion data.

Analyze predictor and criterion data relationships.

Slide 18



FIGURE 5.5 Examples of Different Predictive Validation Designs

SOURCE: Based on Robert M. Guion and C. J. Cranny, “A Note on Concurrent and Predictive Validity Designs: A Critical Reanalysis,” Journal of Applied Psychology 67 (1982), 240; and Frank J. Landy, Psychology of Work Behavior (Homewood, IL: Dorsey Press, 1985), 65.

Slide 19


Requirements for a Criterion-Related Validation Study

The job should be reasonably stable and not in a period of change or transition.

A relevant, reliable criterion that is free from contamination must be available or feasible to develop.

It must be possible to base the validation study on a sample of people and jobs that is representative of people and jobs to which the results will be generalized.

A large enough, and representative, sample of people on whom both predictor and criterion data have been collected must be available.

Slide 20



TABLE 5.2 Basic Considerations for Determining the Feasibility of Content and Criterion-Related Validation Strategies

Content Validation

Must be able to obtain a complete, documented analysis of each of the jobs for which the validation study is being conducted, which is used to identify the content domain of the job under study.

Applicable when a selection device purports to measure existing job skills, knowledge, or behavior. Inference is that content of the selection device measures content of the job.

Although not necessarily required, should be able to show that a criterion-related methodology is not feasible.

Inferential leap from content of the selection device to job content should be a small one.

Most likely to be viewed as suitable when skills and knowledge for doing a job are being measured.

Not suitable when abstract mental processes, constructs, or traits are being measured or inferred.

May not provide sufficient validation evidence when applicants are being ranked.

A substantial amount of the critical job behaviors and KSAs should be represented in the selection measure.

SOURCE: Based on Society for Industrial and Organizational Psychology, Principles for the Validation and Use of Personnel Selection Procedures, 4th ed. (Bowling Green, OH: Author, 2003); Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, and Department of Justice, Adoption by Four Agencies of Uniform Guidelines on Employee Selection Procedures, 43 Federal Register 38,295, 38,300, 38,301, 38,303 (August 25, 1978); and Robert M. Guion, Personnel Testing (New York: McGraw-Hill, 1965).

Slide 21

Criterion-Related Validation (a)

Must be able to assume the job is reasonably stable and not undergoing change or evolution.

Must be able to obtain a relevant, reliable, and uncontaminated measure of job performance (that is, a criterion). Should be based as much as possible on a sample that is representative of the people and jobs to which the results are to be generalized.

Should have adequate statistical power in order to identify a predictor-criterion relationship if one exists. To do so, must have:

a. adequate sample size;

b. variance or individual differences in scores on the selection measure and criterion.

Must be able to obtain a complete analysis of each of the jobs for which the validation study is being conducted. Used to justify the predictors and criteria being studied.

Must be able to infer that performance on the selection measure can predict future job performance.

Must have ample resources in terms of time, staff, and money.

TABLE 5.2 Basic Considerations for Determining the Feasibility of Content and Criterion-Related Validation Strategies (cont’d)

SOURCE: Based on Society for Industrial and Organizational Psychology, Principles for the Validation and Use of Personnel Selection Procedures, 4th ed. (Bowling Green, OH: Author, 2003); Equal Employment Opportunity Commission, Civil Service Commission, Department of Labor, and Department of Justice, Adoption by Four Agencies of Uniform Guidelines on Employee Selection Procedures, 43 Federal Register 38,295, 38,300, 38,301, 38,303 (August 25, 1978); and Robert M. Guion, Personnel Testing (New York: McGraw-Hill, 1965).

(a) Criterion-related validation includes both concurrent and predictive validity.

Slide 22


Construct Validation Strategy

Construct

A postulated concept, attribute, characteristic, or quality thought to be assessed by a measure.

Construct Validation

A research process involving the collection of evidence used to test hypotheses about relationships between measures and their constructs.

Slide 23


FIGURE 5.6 A Hypothetical Example of Construct Validation of the Link Between Working as a Team Member (Construct) and the Teamwork Inventory (Indicant)

Slide 24


Steps in a Construct Validation Study

The construct must be carefully defined and hypotheses formed concerning the relationships between the construct and other variables.

A measure hypothesized to assess the construct is developed.

Studies testing the hypothesized relationships (formed in step 1) between the constructed measure and other, relevant variables are conducted.

Slide 25


Measuring a Construct

Intercorrelations among the measure’s parts should show if the parts cluster into one or more groupings (see the sketch after this list).

Parts of the measure belonging to the same grouping should be internally consistent or reliable.

Different measures assessing the same construct as the developed measure should be related with the developed measure.

Content validity studies should show how experts have judged the manner in which parts of the measure were developed and how these parts of the measure sampled the job content domain.
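The first two points lend themselves to a quick check of item intercorrelations and internal consistency (coefficient alpha). A rough sketch with invented item responses:

```python
import numpy as np

# Hypothetical responses of six people to four items meant to tap one construct
items = np.array([
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
])

print(np.corrcoef(items, rowvar=False))            # item intercorrelation matrix

k = items.shape[1]
item_var = items.var(axis=0, ddof=1).sum()
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)   # coefficient (Cronbach's) alpha
print(f"alpha = {alpha:.2f}")
```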

Slide 26


Empirical Considerations in Criterion-Related Validation Strategies

Is there a relationship between applicants’ responses to our selection measure and their performance on the job?

If so, is the relationship strong enough to warrant the measure’s use in employment decision making?

Slide 27


TABLE 5.3 Hypothetical Inventory Score and Job Performance Rating Data Collected on 20 Salespeople

NOTE: Sales Ability Inventory Score (high scores) = greater sales ability. Job Performance Rating (high scores) = greater job performance of sales personnel as judged by sales supervisors.

Slide 28


FIGURE 5.7 Hypothetical Scattergram of Sales Ability Inventory Scores and Job Performance Ratings for 20 Salespeople

Slide 29


FIGURE 5.8 Description of Possible Predictor/Criterion Relationships of a Validity Coefficient

Slide 30


Importance of Large Sample Sizes

To be considered statistically significant, a validity coefficient based on a small sample must be larger in value than one based on a large sample (see the sketch after this list).

A validity coefficient of a small sample is less reliable than one based on a large sample.

The chances of finding that a predictor is valid when the predictor is actually or truly valid are lower for small sample sizes than for large ones.
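A sketch of the first point: the smallest correlation that reaches two-tailed significance at p < .05 shrinks rapidly as sample size grows (this uses the standard t-test for a correlation coefficient; the sample sizes are arbitrary):

```python
from math import sqrt
from scipy.stats import t

def critical_r(n: int, alpha: float = 0.05) -> float:
    """Smallest |r| that is statistically significant (two-tailed) for sample size n."""
    df = n - 2
    t_crit = t.ppf(1 - alpha / 2, df)
    return t_crit / sqrt(t_crit ** 2 + df)

for n in (20, 50, 100, 500):
    print(n, round(critical_r(n), 2))   # roughly .44, .28, .20, .09
```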

Slide 31


Interpreting Validity Coefficients

“What precisely does a statistically significant coefficient mean?”

Coefficient of determination

The percentage of variance in the criterion that can be explained by variance associated with the predictor (see the short example after this list).

Expectancy tables and charts

Utility analysis

An economic interpretation to the meaning of a validity coefficient
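For the coefficient of determination mentioned above, the arithmetic is simply the square of the validity coefficient (the value below is hypothetical):

```python
r_xy = 0.51   # hypothetical validity coefficient
print(f"{r_xy ** 2:.0%} of criterion variance is explained by the predictor")  # 26%
```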

Slide 32


Prediction

Linear Regression

The determination of how changes in criterion scores are functionally related to changes in predictor scores

Regression Equation

Mathematically describes the functional relationship between the predictor and criterion

Simple regression assumes only one predictor

Multiple regression assumes two or more predictors
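A minimal sketch of a simple (one-predictor) regression equation fitted to hypothetical predictor and criterion scores; the data are invented and far smaller than a real validation sample:

```python
import numpy as np

x = np.array([10, 12, 14, 15, 17, 18, 20, 22, 24, 25], dtype=float)  # predictor scores
y = np.array([2, 3, 3, 4, 4, 5, 5, 6, 7, 7], dtype=float)            # criterion scores

b, a = np.polyfit(x, y, 1)           # slope b and intercept a of y-hat = a + b * x
r = np.corrcoef(x, y)[0, 1]          # validity coefficient

print(f"regression equation: y-hat = {a:.2f} + {b:.2f} * x")
print(f"r = {r:.2f}, r squared = {r ** 2:.2f}")
print(f"predicted criterion score for x = 16: {a + b * 16:.2f}")
```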

Slide 33


FIGURE 5.9 Hypothetical Plot of the Regression Line for Sales Ability Inventory Scores and Job Performance Ratings for 20 Salespeople

Slide 34


Cross-Validation

Cross-Validation

The testing of regression equations for any reduction (shrinkage) in their predictive accuracy prior to implementation in selection decision making

If the regression equation developed on one sample can predict scores in the other sample, then the regression equation is “cross-validated.”

Cross-Validation Methods

Empirical

Formula estimation

Slide 35


Cross-Validation (cont’d)

Empirical Cross-Validation (Split-Sample)

A group on whom predictor and criterion data are available is randomly divided into two groups.

A regression equation developed on one of the groups (the “weighting group”) is used to predict the criterion for the other group (“holdout group”).

Predicted criterion scores are obtained for each person in the holdout group.

For the holdout group, predicted criterion scores are then correlated with their actual criterion scores. A high, statistically significant correlation coefficient indicates that the regression equation is useful for individuals other than those on whom the equation was developed.
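A rough sketch of the empirical (split-sample) procedure, using simulated predictor and criterion data; the sample size, split, and effect size are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)                    # predictor scores for 200 people
y = 0.5 * x + rng.normal(0, 8, 200)            # criterion scores

idx = rng.permutation(200)
weighting, holdout = idx[:100], idx[100:]      # random split into two groups

b, a = np.polyfit(x[weighting], y[weighting], 1)   # equation from the weighting group
y_hat = a + b * x[holdout]                         # predicted criterion, holdout group

cross_r = np.corrcoef(y_hat, y[holdout])[0, 1]
print(f"cross-validated r = {cross_r:.2f}")        # a high r suggests the equation generalizes
```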

Slide 36


Expectancy Tables and Charts

Construction of Expectancy Tables and Charts

Individuals on whom criterion data are available are divided into two groups, with roughly half of the individuals in each group.

For each predictor score, frequencies of the number of employees in each group are determined.

The predictor score distribution is divided into fifths.

The number and percentage of individuals in each group are determined.

An expectancy chart that depicts these percentages is then prepared.
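A sketch of those steps with simulated scores, assuming the “superior” group is defined as the top half on the criterion (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
scores = rng.normal(50, 10, 200)                     # predictor scores
criterion = 0.4 * scores + rng.normal(0, 9, 200)     # job performance measure

superior = criterion >= np.median(criterion)         # top half = "superior" group

edges = np.quantile(scores, [0.2, 0.4, 0.6, 0.8])    # divide predictor distribution into fifths
fifth = np.digitize(scores, edges)                   # 0 = lowest fifth ... 4 = highest fifth

for f in range(5):
    pct = 100 * superior[fifth == f].mean()
    print(f"predictor fifth {f + 1}: {pct:.0f}% rated superior")
```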

Slide 37


FIGURE 5.10 Hypothetical Scattergram of Employment Interview Scores Plotted against Job Performance Ratings for 65 Financial Advisers

Slide 38


TABLE 5.4 Percentage of Financial Advisers Rated as Superior on the Job for Various Employment Interview Score Ratings

Slide 39


Factors Affecting the Size of Validity Coefficients


Violation of Statistical Assumptions

Reliability of Criterion and Predictor

Restriction of Range

Criterion Contamination

Validity Coefficient

Slide 40


FIGURE 5.11 Appropriateness of Pearson r for Various Predictor-Criterion Relationships

Slide 41

Utility Analysis

Utility Analysis: A Definition

Shows the degree to which use of a selection measure improves the quality of individuals selected versus what would have happened if the measure had not been used.

Uses dollars-and-cents terms, as well as other measures such as percentage increases in output, to translate the results of a validation study into terms that are important to and understandable by managers.

Slide 42



FIGURE 5.12 Examples of Utility Analysis Computed under Different Selection Conditions

Example 1 Calculation:

Expected dollar payoff due to increased productivity per year for all employees hired = (Number selected × Test validity × SD of job performance × Average test score) − (Number tested × Cost of testing)

= 10(0.51)($12,000)(1.00) − 100($20) = $59,200
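The same calculation written as a small function (a sketch of the payoff formula in Example 1; it assumes the average test score of those selected is expressed in standard-score units, which is how the 1.00 above is read):

```python
def expected_payoff(n_selected, validity, sd_performance_dollars,
                    avg_test_score_selected, n_tested, cost_per_test):
    """Expected dollar payoff per year from using the test for selection."""
    gain = n_selected * validity * sd_performance_dollars * avg_test_score_selected
    cost = n_tested * cost_per_test
    return gain - cost

print(expected_payoff(10, 0.51, 12_000, 1.00, 100, 20))   # 59200.0
```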

Slide 43

Broader Perspectives of Validity

Validity Generalization

Uses evidence accumulated from multiple validation studies to show the extent to which a predictor that is valid in one setting is valid in other, similar settings.

Job Component Validity

A process of inferring the validity of a predictor, based on existing evidence, for a particular dimension or component of job performance.

Slide 44


Validity Generalization

Situational Specificity Hypothesis (Ghiselli)

The validity of a test is specific to the job or situation where the validation study was completed.

Deficiencies Affecting Validity

The use of small sample sizes (sampling error)

Differences in test or predictor reliability

Differences in criterion reliability

Differences in the restriction of range of scores

Criterion contamination and deficiency

Computational and typographical errors

Slight differences among tests thought to be measuring the same attributes or constructs

Slide 45


Validity Generalization (cont’d)

Validity Generalization Studies (Schmidt and Hunter)

Is the validity of a test generalizable from one situation to another that is very similar (that is, similar in terms of the same type of test and job on which the validity evidence had been accumulated)?

Conclusions

It is not necessary to conduct validity studies within each organization for every job

Mental ability tests can be expected to predict job performance in most, if not all, employment situations

Slide 46


Steps in Conducting Validity Generalization Studies

Obtain a large number of published and unpublished validation studies

Compute the average validity coefficient for these studies

Calculate the variance of differences among validity coefficients

From these differences, subtract the variance due to the effect of small sample size

Correct the average validity coefficient and the variance for errors due to other methodological deficiencies, test reliability, and restriction in range scores

If the differences among validity coefficients are small then validity is generalizable across situations
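A bare-bones sketch of the first few steps (sample-size-weighted mean validity, observed variance, and subtraction of sampling-error variance); the coefficients and sample sizes are invented, and corrections for the other artifacts listed above are omitted:

```python
import numpy as np

# Hypothetical validity coefficients and sample sizes from prior studies
r = np.array([0.20, 0.35, 0.15, 0.40, 0.28, 0.33])
n = np.array([60, 120, 45, 200, 80, 150])

mean_r = np.average(r, weights=n)                    # average validity coefficient
obs_var = np.average((r - mean_r) ** 2, weights=n)   # variance among coefficients

# Variance expected from sampling error alone, given the average sample size
sampling_var = (1 - mean_r ** 2) ** 2 / (n.mean() - 1)

residual = max(obs_var - sampling_var, 0.0)
print(f"mean r = {mean_r:.2f}, residual variance = {residual:.4f}")
# Little residual variance suggests validity generalizes across situations.
```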

Slide 47


Criticisms of Validity Generalization

Appropriateness of correction formulas in overestimating variances due to study deficiencies

Low statistical power of generalization analyses and Type II errors of situational specificity (sampling error variance)

Procedures calling for double corrections

Lumping together good and bad studies

File drawer bias (unpublished studies with negative results)

Underestimation of criterion reliability by generalization advocates in their studies

Slide 48


Job Component Validity

Job Component Validity

Is a validation strategy that incorporates a standardized means for obtaining information on the job(s) for which a validation study is being conducted.

Involves inferring validity for a given job by analyzing the job, identifying the job’s major functions or components of work, and then choosing tests or other predictors—based on evidence obtained through previous research—that predict performance on these major work components.

Slide 49


Job Component Validity (cont’d)

Conducting a Job Component Validity Study

Conduct an analysis of the job using the Position Analysis Questionnaire (PAQ)

Identify the major components of work required on the job

Identify the attributes required for performing the major components of the job

Choose tests that measure the most important attributes identified from the PAQ analysis

Slide 50


Validation Options for Small Businesses

Options for Validating Predictors

Show that the measure used in selection assesses the same constructs as a measure used in a previous validity generalization study

Show that the jobs for which the measure is used are similar to those jobs in the validity generalization study

Demonstrate job component validity or some other form of synthetic validity

Slide 51


Synthetic Validity

Steps in Synthetic Validity

Analyzing jobs to identify their major components of work

Determining the relationships of selection predictors with these job components using content, construct, or criterion-related validity strategies

Choosing predictors to use in selection based on their relationships with important job components

Slide 52



FIGURE 5.13 Illustration of Test Validation Using Synthetic Validity

NOTE: X represents job dimensions characteristic of a job. Jobs sharing an X for the same dimension require the same job function. ✔ indicates the selection measure chosen for predicting success on a particular job dimension.

Slide 53


Key Terms and Concepts

Validity

Validation

Validation study

Content validity

Job content domain

Face validity

Criterion-related validity

Concurrent validity

Predictive validity

Construct validity

Validity coefficients


Coefficient of determination

Expectancy tables

Linear regression

Simple regression

Multiple regression

Cross-validation

Utility analysis

Validity generalization

Job component validity

Synthetic validity