Self-Correction of Wrong Answers as an Alternative to the Arbitrary Setting of Observed-Score Standards in Competency Testing

Sorel Cahan, Nora Cohen

Research output: Contribution to journal › Article › peer-review


Abstract

The probabilities of the two types of classification errors in competency testing are not equally manipulable. Whereas testers can successfully minimize the probability of Type II error (misidentification of true “nonmasters” as “masters”), they are much less able to do so for Type I error (misidentification of true “masters” as “nonmasters”). Consequently, the proportion of identified nonmasters is likely to be artifactually high. The currently used method for coping with this problem is the arbitrary setting of observed-score standards below 100%. This paper offers an alternative solution that does not involve arbitrary decisions, namely self-correction of wrong answers. The paper presents the rationale underlying this solution and discusses its application.
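A minimal numerical sketch (not from the paper) of the trade-off the abstract describes, assuming a simple binomial scoring model with hypothetical parameters: n items, an item-success probability for a true master and for a true nonmaster, and candidate observed-score standards (cutoffs). It illustrates why a 100% cutoff drives Type II error toward zero while leaving Type I error artifactually high.

```python
# Illustrative sketch only; the binomial model and all parameter values are assumptions,
# not taken from Cahan and Cohen (1990).
from scipy.stats import binom

n = 20              # number of test items (assumption)
p_master = 0.90     # item-success probability for a true master (assumption)
p_nonmaster = 0.50  # item-success probability for a true nonmaster (assumption)

for cutoff in (1.00, 0.90, 0.80):
    k = int(round(cutoff * n))  # minimum number correct required to be classified a master
    # Type I error: a true master scores below the cutoff and is classified a nonmaster.
    type1 = binom.cdf(k - 1, n, p_master)
    # Type II error: a true nonmaster scores at or above the cutoff and is classified a master.
    type2 = 1 - binom.cdf(k - 1, n, p_nonmaster)
    print(f"cutoff={cutoff:.0%}: Type I = {type1:.3f}, Type II = {type2:.3f}")
```

Under these assumed values, a 100% standard yields a near-zero Type II error but a Type I error close to 0.88, while lowering the cutoff reduces Type I error at the cost of some Type II error, which is the tension motivating both the arbitrary lowering of standards and the self-correction alternative.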

Original language: English
Pages (from-to): 7-13
Number of pages: 7
Journal: Educational and Psychological Measurement
Volume: 50
Issue number: 1
DOIs
State: Published - Mar 1990

